00:00:00.000 Started by upstream project "autotest-per-patch" build number 127125 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.121 The recommended git tool is: git 00:00:00.121 using credential 00000000-0000-0000-0000-000000000002 00:00:00.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.164 Fetching changes from the remote Git repository 00:00:00.175 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.208 Using shallow fetch with depth 1 00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.208 > git --version # timeout=10 00:00:00.235 > git --version # 'git version 2.39.2' 00:00:00.236 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.197 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.206 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.217 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:08.217 > git config core.sparsecheckout # timeout=10 00:00:08.227 > git read-tree -mu HEAD # timeout=10 00:00:08.241 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:08.283 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:08.283 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:08.389 [Pipeline] Start of Pipeline 00:00:08.405 [Pipeline] library 00:00:08.407 Loading library shm_lib@master 00:00:08.407 Library shm_lib@master is cached. Copying from home. 00:00:08.425 [Pipeline] node 00:00:08.436 Running on VM-host-WFP7 in /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:08.438 [Pipeline] { 00:00:08.451 [Pipeline] catchError 00:00:08.453 [Pipeline] { 00:00:08.468 [Pipeline] wrap 00:00:08.479 [Pipeline] { 00:00:08.488 [Pipeline] stage 00:00:08.490 [Pipeline] { (Prologue) 00:00:08.510 [Pipeline] echo 00:00:08.511 Node: VM-host-WFP7 00:00:08.518 [Pipeline] cleanWs 00:00:08.527 [WS-CLEANUP] Deleting project workspace... 00:00:08.527 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.533 [WS-CLEANUP] done 00:00:08.712 [Pipeline] setCustomBuildProperty 00:00:08.799 [Pipeline] httpRequest 00:00:08.828 [Pipeline] echo 00:00:08.829 Sorcerer 10.211.164.101 is alive 00:00:08.835 [Pipeline] httpRequest 00:00:08.839 HttpMethod: GET 00:00:08.840 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.840 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.859 Response Code: HTTP/1.1 200 OK 00:00:08.859 Success: Status code 200 is in the accepted range: 200,404 00:00:08.860 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:26.623 [Pipeline] sh 00:00:26.898 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:26.914 [Pipeline] httpRequest 00:00:26.945 [Pipeline] echo 00:00:26.947 Sorcerer 10.211.164.101 is alive 00:00:26.954 [Pipeline] httpRequest 00:00:26.958 HttpMethod: GET 00:00:26.959 URL: http://10.211.164.101/packages/spdk_c8a637412a18bf815d83b29b821c23a45379404a.tar.gz 00:00:26.959 Sending request to url: http://10.211.164.101/packages/spdk_c8a637412a18bf815d83b29b821c23a45379404a.tar.gz 00:00:26.980 Response Code: HTTP/1.1 200 OK 00:00:26.980 Success: Status code 200 is in the accepted range: 200,404 00:00:26.980 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/spdk_c8a637412a18bf815d83b29b821c23a45379404a.tar.gz 00:01:06.364 [Pipeline] sh 00:01:06.660 + tar --no-same-owner -xf spdk_c8a637412a18bf815d83b29b821c23a45379404a.tar.gz 00:01:09.206 [Pipeline] sh 00:01:09.489 + git -C spdk log --oneline -n5 00:01:09.489 c8a637412 bdev/compress: release reduce vol resource when comp bdev fails to be created. 
00:01:09.489 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:01:09.489 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:01:09.489 e14876e17 lib/nvme: add spdk_nvme_scan_attached() 00:01:09.489 1d6dfcbeb nvme_pci: ctrlr_scan_attached callback 00:01:09.508 [Pipeline] writeFile 00:01:09.525 [Pipeline] sh 00:01:09.810 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:09.822 [Pipeline] sh 00:01:10.105 + cat autorun-spdk.conf 00:01:10.105 SPDK_TEST_UNITTEST=1 00:01:10.105 SPDK_RUN_VALGRIND=0 00:01:10.105 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.105 SPDK_TEST_NVME=1 00:01:10.105 SPDK_TEST_BLOCKDEV=1 00:01:10.105 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.112 RUN_NIGHTLY=0 00:01:10.114 [Pipeline] } 00:01:10.130 [Pipeline] // stage 00:01:10.145 [Pipeline] stage 00:01:10.147 [Pipeline] { (Run VM) 00:01:10.162 [Pipeline] sh 00:01:10.447 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.447 + echo 'Start stage prepare_nvme.sh' 00:01:10.447 Start stage prepare_nvme.sh 00:01:10.447 + [[ -n 2 ]] 00:01:10.447 + disk_prefix=ex2 00:01:10.447 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_2 ]] 00:01:10.447 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf ]] 00:01:10.447 + source /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf 00:01:10.447 ++ SPDK_TEST_UNITTEST=1 00:01:10.447 ++ SPDK_RUN_VALGRIND=0 00:01:10.447 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.447 ++ SPDK_TEST_NVME=1 00:01:10.447 ++ SPDK_TEST_BLOCKDEV=1 00:01:10.447 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.447 ++ RUN_NIGHTLY=0 00:01:10.447 + cd /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:10.447 + nvme_files=() 00:01:10.447 + declare -A nvme_files 00:01:10.447 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.447 + nvme_files['nvme.img']=5G 00:01:10.447 + nvme_files['nvme-cmb.img']=5G 00:01:10.447 + nvme_files['nvme-multi0.img']=4G 00:01:10.447 + nvme_files['nvme-multi1.img']=4G 00:01:10.447 + nvme_files['nvme-multi2.img']=4G 00:01:10.447 + nvme_files['nvme-openstack.img']=8G 00:01:10.447 + nvme_files['nvme-zns.img']=5G 00:01:10.447 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.447 + (( SPDK_TEST_FTL == 1 )) 00:01:10.447 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.447 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:10.447 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:10.447 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:10.447 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:10.447 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:10.447 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:10.447 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.447 + for nvme in "${!nvme_files[@]}" 00:01:10.447 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:10.707 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.707 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:10.707 + echo 'End stage prepare_nvme.sh' 00:01:10.707 End stage prepare_nvme.sh 00:01:10.720 [Pipeline] sh 00:01:11.004 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:11.004 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f freebsd14 00:01:11.004 00:01:11.004 DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant 00:01:11.004 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk 00:01:11.004 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_2 00:01:11.004 HELP=0 00:01:11.004 DRY_RUN=0 00:01:11.004 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img, 00:01:11.004 NVME_DISKS_TYPE=nvme, 00:01:11.004 NVME_AUTO_CREATE=0 00:01:11.004 NVME_DISKS_NAMESPACES=, 00:01:11.004 NVME_CMB=, 00:01:11.004 NVME_PMR=, 00:01:11.004 NVME_ZNS=, 00:01:11.004 NVME_MS=, 00:01:11.004 NVME_FDP=, 00:01:11.004 SPDK_VAGRANT_DISTRO=freebsd14 00:01:11.004 SPDK_VAGRANT_VMCPU=10 00:01:11.004 SPDK_VAGRANT_VMRAM=14336 00:01:11.004 SPDK_VAGRANT_PROVIDER=libvirt 00:01:11.004 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:11.004 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:11.004 SPDK_OPENSTACK_NETWORK=0 00:01:11.004 
VAGRANT_PACKAGE_BOX=0 00:01:11.004 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:11.004 FORCE_DISTRO=true 00:01:11.004 VAGRANT_BOX_VERSION= 00:01:11.004 EXTRA_VAGRANTFILES= 00:01:11.004 NIC_MODEL=virtio 00:01:11.004 00:01:11.004 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt' 00:01:11.004 /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:13.542 Bringing machine 'default' up with 'libvirt' provider... 00:01:13.801 ==> default: Creating image (snapshot of base box volume). 00:01:13.801 ==> default: Creating domain with the following settings... 00:01:13.801 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1721874360_6d4d496ff2ca7c6c791a 00:01:13.801 ==> default: -- Domain type: kvm 00:01:13.801 ==> default: -- Cpus: 10 00:01:13.801 ==> default: -- Feature: acpi 00:01:13.801 ==> default: -- Feature: apic 00:01:13.801 ==> default: -- Feature: pae 00:01:13.801 ==> default: -- Memory: 14336M 00:01:13.801 ==> default: -- Memory Backing: hugepages: 00:01:13.801 ==> default: -- Management MAC: 00:01:13.801 ==> default: -- Loader: 00:01:13.801 ==> default: -- Nvram: 00:01:13.801 ==> default: -- Base box: spdk/freebsd14 00:01:13.801 ==> default: -- Storage pool: default 00:01:13.801 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1721874360_6d4d496ff2ca7c6c791a.img (32G) 00:01:13.801 ==> default: -- Volume Cache: default 00:01:13.801 ==> default: -- Kernel: 00:01:13.801 ==> default: -- Initrd: 00:01:13.801 ==> default: -- Graphics Type: vnc 00:01:13.801 ==> default: -- Graphics Port: -1 00:01:13.801 ==> default: -- Graphics IP: 127.0.0.1 00:01:13.801 ==> default: -- Graphics Password: Not defined 00:01:13.801 ==> default: -- Video Type: cirrus 00:01:13.801 ==> default: -- Video VRAM: 9216 00:01:13.801 ==> default: -- Sound Type: 00:01:13.801 ==> default: -- Keymap: en-us 00:01:13.801 ==> default: -- TPM Path: 00:01:13.801 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:13.801 ==> default: -- Command line args: 00:01:13.801 ==> default: -> value=-device, 00:01:13.801 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:13.801 ==> default: -> value=-drive, 00:01:13.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:13.801 ==> default: -> value=-device, 00:01:13.801 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.059 ==> default: Creating shared folders metadata... 00:01:14.059 ==> default: Starting domain. 00:01:15.961 ==> default: Waiting for domain to get an IP address... 00:01:37.902 ==> default: Waiting for SSH to become available... 00:01:46.029 ==> default: Configuring and enabling network interfaces... 00:01:54.155 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:02.277 ==> default: Mounting SSHFS shared folder... 00:02:04.192 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:02:04.192 ==> default: Checking Mount.. 00:02:05.132 ==> default: Folder Successfully Mounted! 00:02:05.132 ==> default: Running provisioner: file... 00:02:06.519 default: ~/.gitconfig => .gitconfig 00:02:06.777 00:02:06.778 SUCCESS! 
00:02:06.778 00:02:06.778 cd to /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt and type "vagrant ssh" to use. 00:02:06.778 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:06.778 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt" to destroy all trace of vm. 00:02:06.778 00:02:06.786 [Pipeline] } 00:02:06.803 [Pipeline] // stage 00:02:06.812 [Pipeline] dir 00:02:06.812 Running in /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt 00:02:06.814 [Pipeline] { 00:02:06.825 [Pipeline] catchError 00:02:06.826 [Pipeline] { 00:02:06.837 [Pipeline] sh 00:02:07.119 + vagrant ssh-config --host vagrant 00:02:07.119 + sed -ne /^Host/,$p 00:02:07.119 + tee ssh_conf 00:02:10.413 Host vagrant 00:02:10.413 HostName 192.168.121.161 00:02:10.413 User vagrant 00:02:10.413 Port 22 00:02:10.413 UserKnownHostsFile /dev/null 00:02:10.413 StrictHostKeyChecking no 00:02:10.413 PasswordAuthentication no 00:02:10.413 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:02:10.413 IdentitiesOnly yes 00:02:10.413 LogLevel FATAL 00:02:10.413 ForwardAgent yes 00:02:10.413 ForwardX11 yes 00:02:10.413 00:02:10.428 [Pipeline] withEnv 00:02:10.430 [Pipeline] { 00:02:10.446 [Pipeline] sh 00:02:10.730 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:10.730 source /etc/os-release 00:02:10.730 [[ -e /image.version ]] && img=$(< /image.version) 00:02:10.730 # Minimal, systemd-like check. 00:02:10.730 if [[ -e /.dockerenv ]]; then 00:02:10.730 # Clear garbage from the node's name: 00:02:10.730 # agt-er_autotest_547-896 -> autotest_547-896 00:02:10.730 # $HOSTNAME is the actual container id 00:02:10.730 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:10.730 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:10.730 # We can assume this is a mount from a host where container is running, 00:02:10.730 # so fetch its hostname to easily identify the target swarm worker. 00:02:10.730 container="$(< /etc/hostname) ($agent)" 00:02:10.730 else 00:02:10.730 # Fallback 00:02:10.730 container=$agent 00:02:10.730 fi 00:02:10.730 fi 00:02:10.730 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:10.730 00:02:10.742 [Pipeline] } 00:02:10.810 [Pipeline] // withEnv 00:02:10.817 [Pipeline] setCustomBuildProperty 00:02:10.827 [Pipeline] stage 00:02:10.828 [Pipeline] { (Tests) 00:02:10.840 [Pipeline] sh 00:02:11.118 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:11.392 [Pipeline] sh 00:02:11.674 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:11.688 [Pipeline] timeout 00:02:11.688 Timeout set to expire in 1 hr 30 min 00:02:11.690 [Pipeline] { 00:02:11.706 [Pipeline] sh 00:02:12.006 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:12.954 HEAD is now at c8a637412 bdev/compress: release reduce vol resource when comp bdev fails to be created. 
00:02:12.968 [Pipeline] sh 00:02:13.253 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:13.526 [Pipeline] sh 00:02:13.809 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:14.086 [Pipeline] sh 00:02:14.372 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:02:14.372 ++ readlink -f spdk_repo 00:02:14.372 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:14.372 + [[ -n /home/vagrant/spdk_repo ]] 00:02:14.372 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:14.372 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:14.372 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:14.372 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:14.372 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:14.372 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:02:14.372 + cd /home/vagrant/spdk_repo 00:02:14.372 + source /etc/os-release 00:02:14.372 ++ NAME=FreeBSD 00:02:14.372 ++ VERSION=14.0-RELEASE 00:02:14.372 ++ VERSION_ID=14.0 00:02:14.372 ++ ID=freebsd 00:02:14.372 ++ ANSI_COLOR='0;31' 00:02:14.372 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:02:14.372 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:02:14.372 ++ HOME_URL=https://FreeBSD.org/ 00:02:14.372 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:14.372 + uname -a 00:02:14.372 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:02:14.372 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:14.632 Contigmem (not present) 00:02:14.632 Buffer Size: not set 00:02:14.632 Num Buffers: not set 00:02:14.632 00:02:14.632 00:02:14.632 Type BDF Vendor Device Driver 00:02:14.632 NVMe 0:16:0 0x1b36 0x0010 nvme0 00:02:14.632 + rm -f /tmp/spdk-ld-path 00:02:14.632 + source autorun-spdk.conf 00:02:14.632 ++ SPDK_TEST_UNITTEST=1 00:02:14.632 ++ SPDK_RUN_VALGRIND=0 00:02:14.632 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.632 ++ SPDK_TEST_NVME=1 00:02:14.632 ++ SPDK_TEST_BLOCKDEV=1 00:02:14.632 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.632 ++ RUN_NIGHTLY=0 00:02:14.632 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:14.632 + [[ -n '' ]] 00:02:14.632 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:14.632 + for M in /var/spdk/build-*-manifest.txt 00:02:14.632 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:14.632 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.632 + for M in /var/spdk/build-*-manifest.txt 00:02:14.632 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:14.632 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.632 ++ uname 00:02:14.632 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:14.632 + dmesg_pid=1283 00:02:14.632 + tail -F /var/log/messages 00:02:14.632 + [[ FreeBSD == FreeBSD ]] 00:02:14.632 + export LC_ALL=C LC_CTYPE=C 00:02:14.632 + LC_ALL=C 00:02:14.632 + LC_CTYPE=C 00:02:14.632 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.632 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.632 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:14.632 + [[ -x /usr/src/fio-static/fio ]] 00:02:14.632 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:14.632 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:14.632 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:14.632 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:14.632 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:14.632 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:14.632 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:14.632 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:14.632 Test configuration: 00:02:14.632 SPDK_TEST_UNITTEST=1 00:02:14.632 SPDK_RUN_VALGRIND=0 00:02:14.632 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.632 SPDK_TEST_NVME=1 00:02:14.632 SPDK_TEST_BLOCKDEV=1 00:02:14.632 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.892 RUN_NIGHTLY=0 02:27:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:14.892 02:27:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:14.892 02:27:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.892 02:27:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.892 02:27:01 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:14.892 02:27:01 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:14.892 02:27:01 -- paths/export.sh@4 -- $ export PATH 00:02:14.892 02:27:01 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:14.892 02:27:01 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:14.892 02:27:01 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:14.892 02:27:01 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721874421.XXXXXX 00:02:14.892 02:27:01 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721874421.XXXXXX.6Ip9YgVohq 00:02:14.892 02:27:01 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:14.892 02:27:01 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:14.892 02:27:01 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:14.892 02:27:01 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:14.892 02:27:01 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:14.892 02:27:01 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:14.892 02:27:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:14.892 02:27:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.892 02:27:01 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:14.892 02:27:01 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:14.892 02:27:01 -- pm/common@17 -- $ local monitor 00:02:14.892 02:27:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.892 02:27:01 -- pm/common@25 -- $ sleep 1 00:02:14.892 02:27:01 -- 
pm/common@21 -- $ date +%s 00:02:14.892 02:27:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721874421 00:02:14.892 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721874421_collect-vmstat.pm.log 00:02:16.270 02:27:02 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:16.270 02:27:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.270 02:27:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.270 02:27:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:16.270 02:27:02 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.270 Thu Jul 25 02:27:02 UTC 2024 00:02:16.270 02:27:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.270 v24.09-pre-303-gc8a637412 00:02:16.270 02:27:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.270 02:27:02 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:16.270 02:27:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:16.270 02:27:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:16.270 02:27:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:16.270 02:27:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:16.270 02:27:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:16.270 02:27:02 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:16.270 02:27:02 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:16.270 02:27:02 -- common/autobuild_common.sh@423 -- $ run_test unittest_build _unittest_build 00:02:16.270 02:27:02 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:16.270 02:27:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:16.270 02:27:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.270 ************************************ 00:02:16.270 START TEST unittest_build 00:02:16.270 ************************************ 00:02:16.270 02:27:02 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:02:16.270 02:27:02 unittest_build -- common/autobuild_common.sh@414 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:17.206 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:17.206 are only supported on Linux. Turning off default feature. 00:02:17.206 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:17.206 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:18.145 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:18.145 Using 'verbs' RDMA provider 00:02:33.283 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:43.279 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:43.279 Creating mk/config.mk...done. 00:02:43.279 Creating mk/cc.flags.mk...done. 00:02:43.279 Type 'gmake' to build. 00:02:43.279 02:27:29 unittest_build -- common/autobuild_common.sh@415 -- $ gmake -j10 00:02:43.279 gmake[1]: Nothing to be done for 'all'. 
00:02:47.480 ps: stdin: not a terminal 00:02:52.746 The Meson build system 00:02:52.746 Version: 1.4.0 00:02:52.747 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:52.747 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:52.747 Build type: native build 00:02:52.747 Program cat found: YES (/bin/cat) 00:02:52.747 Project name: DPDK 00:02:52.747 Project version: 24.03.0 00:02:52.747 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:02:52.747 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:02:52.747 Host machine cpu family: x86_64 00:02:52.747 Host machine cpu: x86_64 00:02:52.747 Message: ## Building in Developer Mode ## 00:02:52.747 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:52.747 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:52.747 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.747 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:52.747 Program cat found: YES (/bin/cat) 00:02:52.747 Compiler for C supports arguments -march=native: YES 00:02:52.747 Checking for size of "void *" : 8 00:02:52.747 Checking for size of "void *" : 8 (cached) 00:02:52.747 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:52.747 Library m found: YES 00:02:52.747 Library numa found: NO 00:02:52.747 Library fdt found: NO 00:02:52.747 Library execinfo found: YES 00:02:52.747 Has header "execinfo.h" : YES 00:02:52.747 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:02:52.747 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.747 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.747 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.747 Run-time dependency openssl found: YES 3.0.13 00:02:52.747 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:52.747 Library pcap found: YES 00:02:52.747 Has header "pcap.h" with dependency -lpcap: YES 00:02:52.747 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.747 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.747 Compiler for C supports arguments -Wformat: YES 00:02:52.747 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:52.747 Compiler for C supports arguments -Wformat-security: YES 00:02:52.747 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.747 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.747 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.747 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.747 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.747 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.747 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.747 Compiler for C supports arguments -Wundef: YES 00:02:52.747 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.747 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:52.747 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:52.747 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.747 Compiler for C supports arguments -mavx512f: YES 00:02:52.747 Checking if "AVX512 checking" compiles: YES 00:02:52.747 Fetching value of define "__SSE4_2__" : 1 00:02:52.747 Fetching value of 
define "__AES__" : 1 00:02:52.747 Fetching value of define "__AVX__" : 1 00:02:52.747 Fetching value of define "__AVX2__" : 1 00:02:52.747 Fetching value of define "__AVX512BW__" : 1 00:02:52.747 Fetching value of define "__AVX512CD__" : 1 00:02:52.747 Fetching value of define "__AVX512DQ__" : 1 00:02:52.747 Fetching value of define "__AVX512F__" : 1 00:02:52.747 Fetching value of define "__AVX512VL__" : 1 00:02:52.747 Fetching value of define "__PCLMUL__" : 1 00:02:52.747 Fetching value of define "__RDRND__" : 1 00:02:52.747 Fetching value of define "__RDSEED__" : 1 00:02:52.747 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:52.747 Fetching value of define "__znver1__" : (undefined) 00:02:52.747 Fetching value of define "__znver2__" : (undefined) 00:02:52.747 Fetching value of define "__znver3__" : (undefined) 00:02:52.747 Fetching value of define "__znver4__" : (undefined) 00:02:52.747 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:52.747 Message: lib/log: Defining dependency "log" 00:02:52.747 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.747 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.747 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:52.747 Checking for function "getentropy" : YES 00:02:52.747 Message: lib/eal: Defining dependency "eal" 00:02:52.747 Message: lib/ring: Defining dependency "ring" 00:02:52.747 Message: lib/rcu: Defining dependency "rcu" 00:02:52.747 Message: lib/mempool: Defining dependency "mempool" 00:02:52.747 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.747 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:52.747 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.747 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:52.747 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:52.747 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:52.747 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:52.747 Compiler for C supports arguments -mpclmul: YES 00:02:52.747 Compiler for C supports arguments -maes: YES 00:02:52.747 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.747 Compiler for C supports arguments -mavx512bw: YES 00:02:52.747 Compiler for C supports arguments -mavx512dq: YES 00:02:52.747 Compiler for C supports arguments -mavx512vl: YES 00:02:52.747 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.747 Compiler for C supports arguments -mavx2: YES 00:02:52.747 Compiler for C supports arguments -mavx: YES 00:02:52.747 Message: lib/net: Defining dependency "net" 00:02:52.747 Message: lib/meter: Defining dependency "meter" 00:02:52.747 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.747 Message: lib/pci: Defining dependency "pci" 00:02:52.747 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.747 Message: lib/hash: Defining dependency "hash" 00:02:52.747 Message: lib/timer: Defining dependency "timer" 00:02:52.747 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.747 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.747 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.747 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.747 Message: lib/reorder: Defining dependency "reorder" 00:02:52.747 Message: lib/security: Defining dependency "security" 00:02:52.747 Has header "linux/userfaultfd.h" : NO 00:02:52.747 Has header "linux/vduse.h" : NO 00:02:52.747 Compiler for C supports arguments -Wno-format-truncation: NO 
(cached) 00:02:52.747 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.747 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:52.747 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:52.747 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:52.747 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:52.747 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:52.747 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:52.747 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:52.747 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:52.747 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:52.747 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:52.747 Configuring doxy-api-html.conf using configuration 00:02:52.747 Configuring doxy-api-man.conf using configuration 00:02:52.747 Program mandb found: NO 00:02:52.747 Program sphinx-build found: NO 00:02:52.747 Configuring rte_build_config.h using configuration 00:02:52.747 Message: 00:02:52.747 ================= 00:02:52.747 Applications Enabled 00:02:52.747 ================= 00:02:52.747 00:02:52.747 apps: 00:02:52.747 00:02:52.747 00:02:52.747 Message: 00:02:52.747 ================= 00:02:52.747 Libraries Enabled 00:02:52.747 ================= 00:02:52.747 00:02:52.747 libs: 00:02:52.747 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:52.747 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:52.747 cryptodev, dmadev, reorder, security, 00:02:52.747 00:02:52.747 Message: 00:02:52.747 =============== 00:02:52.747 Drivers Enabled 00:02:52.747 =============== 00:02:52.747 00:02:52.747 common: 00:02:52.747 00:02:52.747 bus: 00:02:52.747 pci, vdev, 00:02:52.747 mempool: 00:02:52.747 ring, 00:02:52.747 dma: 00:02:52.747 00:02:52.747 net: 00:02:52.747 00:02:52.747 crypto: 00:02:52.747 00:02:52.747 compress: 00:02:52.747 00:02:52.747 00:02:52.747 Message: 00:02:52.747 ================= 00:02:52.747 Content Skipped 00:02:52.747 ================= 00:02:52.747 00:02:52.747 apps: 00:02:52.747 dumpcap: explicitly disabled via build config 00:02:52.747 graph: explicitly disabled via build config 00:02:52.747 pdump: explicitly disabled via build config 00:02:52.747 proc-info: explicitly disabled via build config 00:02:52.747 test-acl: explicitly disabled via build config 00:02:52.748 test-bbdev: explicitly disabled via build config 00:02:52.748 test-cmdline: explicitly disabled via build config 00:02:52.748 test-compress-perf: explicitly disabled via build config 00:02:52.748 test-crypto-perf: explicitly disabled via build config 00:02:52.748 test-dma-perf: explicitly disabled via build config 00:02:52.748 test-eventdev: explicitly disabled via build config 00:02:52.748 test-fib: explicitly disabled via build config 00:02:52.748 test-flow-perf: explicitly disabled via build config 00:02:52.748 test-gpudev: explicitly disabled via build config 00:02:52.748 test-mldev: explicitly disabled via build config 00:02:52.748 test-pipeline: explicitly disabled via build config 00:02:52.748 test-pmd: explicitly disabled via build config 00:02:52.748 test-regex: explicitly disabled via build config 00:02:52.748 test-sad: explicitly disabled via build config 00:02:52.748 test-security-perf: explicitly disabled via build config 00:02:52.748 00:02:52.748 libs: 00:02:52.748 argparse: explicitly disabled via 
build config 00:02:52.748 metrics: explicitly disabled via build config 00:02:52.748 acl: explicitly disabled via build config 00:02:52.748 bbdev: explicitly disabled via build config 00:02:52.748 bitratestats: explicitly disabled via build config 00:02:52.748 bpf: explicitly disabled via build config 00:02:52.748 cfgfile: explicitly disabled via build config 00:02:52.748 distributor: explicitly disabled via build config 00:02:52.748 efd: explicitly disabled via build config 00:02:52.748 eventdev: explicitly disabled via build config 00:02:52.748 dispatcher: explicitly disabled via build config 00:02:52.748 gpudev: explicitly disabled via build config 00:02:52.748 gro: explicitly disabled via build config 00:02:52.748 gso: explicitly disabled via build config 00:02:52.748 ip_frag: explicitly disabled via build config 00:02:52.748 jobstats: explicitly disabled via build config 00:02:52.748 latencystats: explicitly disabled via build config 00:02:52.748 lpm: explicitly disabled via build config 00:02:52.748 member: explicitly disabled via build config 00:02:52.748 pcapng: explicitly disabled via build config 00:02:52.748 power: only supported on Linux 00:02:52.748 rawdev: explicitly disabled via build config 00:02:52.748 regexdev: explicitly disabled via build config 00:02:52.748 mldev: explicitly disabled via build config 00:02:52.748 rib: explicitly disabled via build config 00:02:52.748 sched: explicitly disabled via build config 00:02:52.748 stack: explicitly disabled via build config 00:02:52.748 vhost: only supported on Linux 00:02:52.748 ipsec: explicitly disabled via build config 00:02:52.748 pdcp: explicitly disabled via build config 00:02:52.748 fib: explicitly disabled via build config 00:02:52.748 port: explicitly disabled via build config 00:02:52.748 pdump: explicitly disabled via build config 00:02:52.748 table: explicitly disabled via build config 00:02:52.748 pipeline: explicitly disabled via build config 00:02:52.748 graph: explicitly disabled via build config 00:02:52.748 node: explicitly disabled via build config 00:02:52.748 00:02:52.748 drivers: 00:02:52.748 common/cpt: not in enabled drivers build config 00:02:52.748 common/dpaax: not in enabled drivers build config 00:02:52.748 common/iavf: not in enabled drivers build config 00:02:52.748 common/idpf: not in enabled drivers build config 00:02:52.748 common/ionic: not in enabled drivers build config 00:02:52.748 common/mvep: not in enabled drivers build config 00:02:52.748 common/octeontx: not in enabled drivers build config 00:02:52.748 bus/auxiliary: not in enabled drivers build config 00:02:52.748 bus/cdx: not in enabled drivers build config 00:02:52.748 bus/dpaa: not in enabled drivers build config 00:02:52.748 bus/fslmc: not in enabled drivers build config 00:02:52.748 bus/ifpga: not in enabled drivers build config 00:02:52.748 bus/platform: not in enabled drivers build config 00:02:52.748 bus/uacce: not in enabled drivers build config 00:02:52.748 bus/vmbus: not in enabled drivers build config 00:02:52.748 common/cnxk: not in enabled drivers build config 00:02:52.748 common/mlx5: not in enabled drivers build config 00:02:52.748 common/nfp: not in enabled drivers build config 00:02:52.748 common/nitrox: not in enabled drivers build config 00:02:52.748 common/qat: not in enabled drivers build config 00:02:52.748 common/sfc_efx: not in enabled drivers build config 00:02:52.748 mempool/bucket: not in enabled drivers build config 00:02:52.748 mempool/cnxk: not in enabled drivers build config 00:02:52.748 mempool/dpaa: 
not in enabled drivers build config 00:02:52.748 mempool/dpaa2: not in enabled drivers build config 00:02:52.748 mempool/octeontx: not in enabled drivers build config 00:02:52.748 mempool/stack: not in enabled drivers build config 00:02:52.748 dma/cnxk: not in enabled drivers build config 00:02:52.748 dma/dpaa: not in enabled drivers build config 00:02:52.748 dma/dpaa2: not in enabled drivers build config 00:02:52.748 dma/hisilicon: not in enabled drivers build config 00:02:52.748 dma/idxd: not in enabled drivers build config 00:02:52.748 dma/ioat: not in enabled drivers build config 00:02:52.748 dma/skeleton: not in enabled drivers build config 00:02:52.748 net/af_packet: not in enabled drivers build config 00:02:52.748 net/af_xdp: not in enabled drivers build config 00:02:52.748 net/ark: not in enabled drivers build config 00:02:52.748 net/atlantic: not in enabled drivers build config 00:02:52.748 net/avp: not in enabled drivers build config 00:02:52.748 net/axgbe: not in enabled drivers build config 00:02:52.748 net/bnx2x: not in enabled drivers build config 00:02:52.748 net/bnxt: not in enabled drivers build config 00:02:52.748 net/bonding: not in enabled drivers build config 00:02:52.748 net/cnxk: not in enabled drivers build config 00:02:52.748 net/cpfl: not in enabled drivers build config 00:02:52.748 net/cxgbe: not in enabled drivers build config 00:02:52.748 net/dpaa: not in enabled drivers build config 00:02:52.748 net/dpaa2: not in enabled drivers build config 00:02:52.748 net/e1000: not in enabled drivers build config 00:02:52.748 net/ena: not in enabled drivers build config 00:02:52.748 net/enetc: not in enabled drivers build config 00:02:52.748 net/enetfec: not in enabled drivers build config 00:02:52.748 net/enic: not in enabled drivers build config 00:02:52.748 net/failsafe: not in enabled drivers build config 00:02:52.748 net/fm10k: not in enabled drivers build config 00:02:52.748 net/gve: not in enabled drivers build config 00:02:52.748 net/hinic: not in enabled drivers build config 00:02:52.748 net/hns3: not in enabled drivers build config 00:02:52.748 net/i40e: not in enabled drivers build config 00:02:52.748 net/iavf: not in enabled drivers build config 00:02:52.748 net/ice: not in enabled drivers build config 00:02:52.748 net/idpf: not in enabled drivers build config 00:02:52.748 net/igc: not in enabled drivers build config 00:02:52.748 net/ionic: not in enabled drivers build config 00:02:52.748 net/ipn3ke: not in enabled drivers build config 00:02:52.748 net/ixgbe: not in enabled drivers build config 00:02:52.748 net/mana: not in enabled drivers build config 00:02:52.748 net/memif: not in enabled drivers build config 00:02:52.748 net/mlx4: not in enabled drivers build config 00:02:52.748 net/mlx5: not in enabled drivers build config 00:02:52.748 net/mvneta: not in enabled drivers build config 00:02:52.748 net/mvpp2: not in enabled drivers build config 00:02:52.748 net/netvsc: not in enabled drivers build config 00:02:52.748 net/nfb: not in enabled drivers build config 00:02:52.748 net/nfp: not in enabled drivers build config 00:02:52.748 net/ngbe: not in enabled drivers build config 00:02:52.748 net/null: not in enabled drivers build config 00:02:52.748 net/octeontx: not in enabled drivers build config 00:02:52.748 net/octeon_ep: not in enabled drivers build config 00:02:52.748 net/pcap: not in enabled drivers build config 00:02:52.748 net/pfe: not in enabled drivers build config 00:02:52.748 net/qede: not in enabled drivers build config 00:02:52.748 net/ring: not in 
enabled drivers build config 00:02:52.748 net/sfc: not in enabled drivers build config 00:02:52.748 net/softnic: not in enabled drivers build config 00:02:52.748 net/tap: not in enabled drivers build config 00:02:52.748 net/thunderx: not in enabled drivers build config 00:02:52.748 net/txgbe: not in enabled drivers build config 00:02:52.748 net/vdev_netvsc: not in enabled drivers build config 00:02:52.748 net/vhost: not in enabled drivers build config 00:02:52.748 net/virtio: not in enabled drivers build config 00:02:52.748 net/vmxnet3: not in enabled drivers build config 00:02:52.748 raw/*: missing internal dependency, "rawdev" 00:02:52.748 crypto/armv8: not in enabled drivers build config 00:02:52.748 crypto/bcmfs: not in enabled drivers build config 00:02:52.748 crypto/caam_jr: not in enabled drivers build config 00:02:52.748 crypto/ccp: not in enabled drivers build config 00:02:52.748 crypto/cnxk: not in enabled drivers build config 00:02:52.748 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.748 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.748 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.748 crypto/mlx5: not in enabled drivers build config 00:02:52.748 crypto/mvsam: not in enabled drivers build config 00:02:52.748 crypto/nitrox: not in enabled drivers build config 00:02:52.748 crypto/null: not in enabled drivers build config 00:02:52.748 crypto/octeontx: not in enabled drivers build config 00:02:52.748 crypto/openssl: not in enabled drivers build config 00:02:52.748 crypto/scheduler: not in enabled drivers build config 00:02:52.748 crypto/uadk: not in enabled drivers build config 00:02:52.748 crypto/virtio: not in enabled drivers build config 00:02:52.748 compress/isal: not in enabled drivers build config 00:02:52.748 compress/mlx5: not in enabled drivers build config 00:02:52.748 compress/nitrox: not in enabled drivers build config 00:02:52.748 compress/octeontx: not in enabled drivers build config 00:02:52.749 compress/zlib: not in enabled drivers build config 00:02:52.749 regex/*: missing internal dependency, "regexdev" 00:02:52.749 ml/*: missing internal dependency, "mldev" 00:02:52.749 vdpa/*: missing internal dependency, "vhost" 00:02:52.749 event/*: missing internal dependency, "eventdev" 00:02:52.749 baseband/*: missing internal dependency, "bbdev" 00:02:52.749 gpu/*: missing internal dependency, "gpudev" 00:02:52.749 00:02:52.749 00:02:52.749 Build targets in project: 81 00:02:52.749 00:02:52.749 DPDK 24.03.0 00:02:52.749 00:02:52.749 User defined options 00:02:52.749 buildtype : debug 00:02:52.749 default_library : static 00:02:52.749 libdir : lib 00:02:52.749 prefix : / 00:02:52.749 c_args : -fPIC -Werror 00:02:52.749 c_link_args : 00:02:52.749 cpu_instruction_set: native 00:02:52.749 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:52.749 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:52.749 enable_docs : false 00:02:52.749 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:52.749 enable_kmods : true 00:02:52.749 max_lcores : 128 00:02:52.749 tests : false 00:02:52.749 00:02:52.749 Found 
ninja-1.11.1 at /usr/local/bin/ninja 00:02:52.749 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:52.749 [1/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:52.749 [2/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:52.749 [3/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:52.749 [4/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:52.749 [5/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.749 [6/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.749 [7/233] Linking static target lib/librte_kvargs.a 00:02:52.749 [8/233] Linking static target lib/librte_log.a 00:02:53.007 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.007 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.007 [11/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.007 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.007 [13/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.007 [14/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.007 [15/233] Linking static target lib/librte_telemetry.a 00:02:53.007 [16/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.007 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.007 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.266 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.266 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.266 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.266 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.266 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.266 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.266 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.538 [26/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.538 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.538 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.538 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.538 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.538 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.538 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:53.538 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:53.538 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.800 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.800 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.800 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.800 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.800 
[39/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.800 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.800 [41/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.800 [42/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:53.800 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.058 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.058 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.058 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.058 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.058 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.058 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.058 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.317 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:02:54.317 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.317 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.317 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.317 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.317 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.317 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.317 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.575 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:02:54.575 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:02:54.575 [61/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.575 [62/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:02:54.575 [63/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:02:54.575 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:54.575 [65/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.575 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:02:54.575 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:02:54.575 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:02:54.833 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:02:54.833 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:02:54.833 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:02:54.833 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:54.833 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:54.833 [74/233] Linking static target lib/librte_eal.a 00:02:55.091 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.091 [76/233] Linking static target lib/librte_ring.a 00:02:55.091 [77/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:55.091 [78/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:55.091 [79/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:55.091 [80/233] 
Linking static target lib/librte_rcu.a 00:02:55.091 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:55.092 [82/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:55.092 [83/233] Linking static target lib/librte_mempool.a 00:02:55.092 [84/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.092 [85/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:55.350 [86/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.350 [87/233] Linking target lib/librte_log.so.24.1 00:02:55.350 [88/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:55.350 [89/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.350 [90/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:55.350 [91/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:55.350 [92/233] Linking target lib/librte_kvargs.so.24.1 00:02:55.350 [93/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.350 [94/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:55.608 [95/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:55.608 [96/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:55.608 [97/233] Linking static target lib/librte_mbuf.a 00:02:55.608 [98/233] Linking target lib/librte_telemetry.so.24.1 00:02:55.608 [99/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:55.608 [100/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:55.608 [101/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:55.608 [102/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:55.608 [103/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:55.608 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:55.608 [105/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:55.608 [106/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.608 [107/233] Linking static target lib/librte_net.a 00:02:55.608 [108/233] Linking static target lib/librte_meter.a 00:02:55.866 [109/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.866 [110/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.866 [111/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:56.124 [112/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.124 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:56.124 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:56.124 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:56.124 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:56.383 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.383 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.383 [119/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.383 [120/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 
00:02:56.383 [121/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.383 [122/233] Linking static target lib/librte_pci.a 00:02:56.383 [123/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.383 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.706 [125/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.706 [126/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.706 [127/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:56.706 [128/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.706 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.706 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:56.706 [131/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:56.706 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:56.706 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:56.706 [134/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:56.706 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:56.706 [136/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:56.706 [137/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:56.706 [138/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.995 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:56.995 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:56.995 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:56.995 [142/233] Linking static target lib/librte_ethdev.a 00:02:56.995 [143/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:56.995 [144/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:56.995 [145/233] Linking static target lib/librte_cmdline.a 00:02:56.995 [146/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:56.995 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:56.995 [148/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.253 [149/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.253 [150/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.253 [151/233] Linking static target lib/librte_timer.a 00:02:57.253 [152/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.253 [153/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.253 [154/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.253 [155/233] Linking static target lib/librte_hash.a 00:02:57.511 [156/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.511 [157/233] Linking static target lib/librte_compressdev.a 00:02:57.511 [158/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.511 [159/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.511 [160/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:57.511 [161/233] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.511 [162/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:57.511 [163/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:57.511 [164/233] Linking static target lib/librte_dmadev.a 00:02:57.769 [165/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.769 [166/233] Linking static target lib/librte_reorder.a 00:02:57.769 [167/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.769 [168/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:57.769 [169/233] Linking static target lib/librte_cryptodev.a 00:02:58.027 [170/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.027 [171/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:58.027 [172/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.027 [173/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:58.027 [174/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.027 [175/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:58.027 [176/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.027 [177/233] Linking static target lib/librte_security.a 00:02:58.027 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:02:58.027 [179/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.027 [180/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.285 [181/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.285 [182/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.285 [183/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.285 [184/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.285 [185/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.285 [186/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.285 [187/233] Linking static target drivers/librte_bus_pci.a 00:02:58.545 [188/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.545 [189/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.545 [190/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.545 [191/233] Linking static target drivers/librte_bus_vdev.a 00:02:58.545 [192/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.545 [193/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.545 [194/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.545 [195/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.545 [196/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.545 [197/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.804 [198/233] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.804 [199/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.804 [200/233] Linking static target drivers/librte_mempool_ring.a 00:03:00.711 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:03:00.711 machine -> /usr/src/sys/amd64/include 00:03:00.711 x86 -> /usr/src/sys/x86/include 00:03:00.711 i386 -> /usr/src/sys/i386/include 00:03:00.711 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:03:00.711 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:03:00.711 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:03:00.711 touch opt_global.h 00:03:00.711 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:03:00.711 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:03:00.711 :> export_syms 00:03:00.711 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:03:00.711 objcopy --strip-debug contigmem.ko 00:03:00.971 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:03:00.971 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:03:00.971 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:03:00.971 :> export_syms 00:03:00.971 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:03:00.971 objcopy --strip-debug nic_uio.ko 00:03:05.188 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.480 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.480 [205/233] Linking target lib/librte_eal.so.24.1 00:03:08.480 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:08.480 [207/233] Linking target drivers/librte_bus_vdev.so.24.1 00:03:08.480 [208/233] Linking target lib/librte_ring.so.24.1 00:03:08.480 [209/233] Linking target lib/librte_pci.so.24.1 00:03:08.480 [210/233] Linking target lib/librte_dmadev.so.24.1 00:03:08.480 [211/233] Linking target lib/librte_timer.so.24.1 00:03:08.480 [212/233] Linking target lib/librte_meter.so.24.1 00:03:08.740 [213/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:08.740 [214/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:08.740 [215/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:08.740 [216/233] Linking target lib/librte_rcu.so.24.1 00:03:08.740 [217/233] Linking target drivers/librte_bus_pci.so.24.1 00:03:08.740 [218/233] Linking target lib/librte_mempool.so.24.1 00:03:08.740 [219/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:09.000 [220/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:09.000 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:03:09.000 [222/233] Linking target lib/librte_mbuf.so.24.1 00:03:09.000 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:09.000 [224/233] Linking target lib/librte_net.so.24.1 00:03:09.000 [225/233] Linking target lib/librte_cryptodev.so.24.1 00:03:09.000 [226/233] Linking target lib/librte_compressdev.so.24.1 00:03:09.000 [227/233] Linking target lib/librte_reorder.so.24.1 00:03:09.259 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:09.259 [229/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:09.259 [230/233] Linking target lib/librte_cmdline.so.24.1 00:03:09.259 [231/233] 
Linking target lib/librte_hash.so.24.1 00:03:09.259 [232/233] Linking target lib/librte_security.so.24.1 00:03:09.259 [233/233] Linking target lib/librte_ethdev.so.24.1 00:03:09.259 INFO: autodetecting backend as ninja 00:03:09.259 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:10.196 CC lib/log/log_deprecated.o 00:03:10.196 CC lib/log/log.o 00:03:10.196 CC lib/log/log_flags.o 00:03:10.196 CC lib/ut/ut.o 00:03:10.196 CC lib/ut_mock/mock.o 00:03:10.196 LIB libspdk_ut_mock.a 00:03:10.196 LIB libspdk_log.a 00:03:10.196 LIB libspdk_ut.a 00:03:10.196 CXX lib/trace_parser/trace.o 00:03:10.196 CC lib/ioat/ioat.o 00:03:10.196 CC lib/util/base64.o 00:03:10.196 CC lib/util/bit_array.o 00:03:10.196 CC lib/util/cpuset.o 00:03:10.196 CC lib/util/crc16.o 00:03:10.196 CC lib/util/crc32.o 00:03:10.196 CC lib/util/crc32c.o 00:03:10.196 CC lib/util/crc32_ieee.o 00:03:10.196 CC lib/dma/dma.o 00:03:10.455 CC lib/util/crc64.o 00:03:10.455 CC lib/util/dif.o 00:03:10.455 CC lib/util/fd.o 00:03:10.455 CC lib/util/fd_group.o 00:03:10.455 CC lib/util/file.o 00:03:10.455 CC lib/util/hexlify.o 00:03:10.455 CC lib/util/iov.o 00:03:10.455 LIB libspdk_dma.a 00:03:10.455 LIB libspdk_ioat.a 00:03:10.455 CC lib/util/math.o 00:03:10.455 CC lib/util/net.o 00:03:10.455 CC lib/util/pipe.o 00:03:10.455 CC lib/util/strerror_tls.o 00:03:10.455 CC lib/util/string.o 00:03:10.455 CC lib/util/uuid.o 00:03:10.455 CC lib/util/xor.o 00:03:10.455 CC lib/util/zipf.o 00:03:10.455 LIB libspdk_util.a 00:03:10.713 CC lib/json/json_parse.o 00:03:10.713 CC lib/json/json_util.o 00:03:10.713 CC lib/env_dpdk/memory.o 00:03:10.713 CC lib/env_dpdk/env.o 00:03:10.713 CC lib/conf/conf.o 00:03:10.713 CC lib/idxd/idxd.o 00:03:10.713 CC lib/vmd/vmd.o 00:03:10.713 CC lib/rdma_utils/rdma_utils.o 00:03:10.713 CC lib/rdma_provider/common.o 00:03:10.713 LIB libspdk_conf.a 00:03:10.713 CC lib/idxd/idxd_user.o 00:03:10.713 CC lib/json/json_write.o 00:03:10.713 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:10.713 CC lib/env_dpdk/pci.o 00:03:10.713 LIB libspdk_rdma_utils.a 00:03:10.713 CC lib/vmd/led.o 00:03:10.713 CC lib/env_dpdk/init.o 00:03:10.713 CC lib/env_dpdk/threads.o 00:03:10.713 LIB libspdk_rdma_provider.a 00:03:10.713 LIB libspdk_idxd.a 00:03:10.713 CC lib/env_dpdk/pci_ioat.o 00:03:10.713 CC lib/env_dpdk/pci_virtio.o 00:03:10.972 LIB libspdk_json.a 00:03:10.972 LIB libspdk_vmd.a 00:03:10.972 CC lib/env_dpdk/pci_vmd.o 00:03:10.972 CC lib/env_dpdk/pci_idxd.o 00:03:10.972 CC lib/env_dpdk/pci_event.o 00:03:10.972 CC lib/env_dpdk/sigbus_handler.o 00:03:10.972 CC lib/env_dpdk/pci_dpdk.o 00:03:10.972 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:10.972 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:10.972 CC lib/jsonrpc/jsonrpc_server.o 00:03:10.972 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.972 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.972 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.972 LIB libspdk_jsonrpc.a 00:03:11.230 CC lib/rpc/rpc.o 00:03:11.230 LIB libspdk_env_dpdk.a 00:03:11.230 LIB libspdk_rpc.a 00:03:11.490 CC lib/trace/trace.o 00:03:11.490 CC lib/trace/trace_flags.o 00:03:11.490 CC lib/trace/trace_rpc.o 00:03:11.490 CC lib/keyring/keyring_rpc.o 00:03:11.490 CC lib/keyring/keyring.o 00:03:11.490 CC lib/notify/notify.o 00:03:11.490 CC lib/notify/notify_rpc.o 00:03:11.490 LIB libspdk_keyring.a 00:03:11.490 LIB libspdk_notify.a 00:03:11.490 LIB libspdk_trace.a 00:03:11.749 LIB libspdk_trace_parser.a 00:03:11.749 CC lib/sock/sock.o 00:03:11.749 CC lib/sock/sock_rpc.o 00:03:11.749 CC 
lib/thread/thread.o 00:03:11.749 CC lib/thread/iobuf.o 00:03:11.749 LIB libspdk_sock.a 00:03:12.007 LIB libspdk_thread.a 00:03:12.007 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:12.007 CC lib/nvme/nvme_fabric.o 00:03:12.007 CC lib/nvme/nvme_ctrlr.o 00:03:12.007 CC lib/nvme/nvme_ns_cmd.o 00:03:12.007 CC lib/nvme/nvme_ns.o 00:03:12.007 CC lib/nvme/nvme_pcie_common.o 00:03:12.007 CC lib/nvme/nvme_pcie.o 00:03:12.007 CC lib/blob/blobstore.o 00:03:12.007 CC lib/init/json_config.o 00:03:12.007 CC lib/accel/accel.o 00:03:12.007 CC lib/init/subsystem.o 00:03:12.265 CC lib/init/subsystem_rpc.o 00:03:12.265 CC lib/accel/accel_rpc.o 00:03:12.265 CC lib/init/rpc.o 00:03:12.265 CC lib/accel/accel_sw.o 00:03:12.265 LIB libspdk_init.a 00:03:12.265 CC lib/nvme/nvme_qpair.o 00:03:12.265 CC lib/blob/request.o 00:03:12.523 LIB libspdk_accel.a 00:03:12.523 CC lib/blob/zeroes.o 00:03:12.523 CC lib/blob/blob_bs_dev.o 00:03:12.523 CC lib/nvme/nvme.o 00:03:12.523 CC lib/nvme/nvme_quirks.o 00:03:12.523 CC lib/event/app.o 00:03:12.523 CC lib/nvme/nvme_transport.o 00:03:12.523 CC lib/bdev/bdev.o 00:03:12.523 CC lib/nvme/nvme_discovery.o 00:03:12.524 CC lib/event/reactor.o 00:03:12.524 LIB libspdk_blob.a 00:03:12.524 CC lib/bdev/bdev_rpc.o 00:03:12.524 CC lib/event/log_rpc.o 00:03:12.524 CC lib/blobfs/blobfs.o 00:03:12.782 CC lib/blobfs/tree.o 00:03:12.782 CC lib/lvol/lvol.o 00:03:12.782 CC lib/event/app_rpc.o 00:03:12.782 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:12.782 CC lib/bdev/bdev_zone.o 00:03:12.782 CC lib/event/scheduler_static.o 00:03:12.782 LIB libspdk_blobfs.a 00:03:12.782 CC lib/bdev/part.o 00:03:12.782 CC lib/bdev/scsi_nvme.o 00:03:12.782 LIB libspdk_event.a 00:03:12.782 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:12.782 CC lib/nvme/nvme_tcp.o 00:03:12.782 CC lib/nvme/nvme_opal.o 00:03:12.782 CC lib/nvme/nvme_io_msg.o 00:03:12.782 LIB libspdk_lvol.a 00:03:12.782 CC lib/nvme/nvme_poll_group.o 00:03:12.782 CC lib/nvme/nvme_zns.o 00:03:13.040 CC lib/nvme/nvme_stubs.o 00:03:13.040 CC lib/nvme/nvme_auth.o 00:03:13.040 LIB libspdk_bdev.a 00:03:13.040 CC lib/nvme/nvme_rdma.o 00:03:13.040 CC lib/scsi/dev.o 00:03:13.040 CC lib/scsi/lun.o 00:03:13.299 CC lib/scsi/port.o 00:03:13.299 CC lib/scsi/scsi.o 00:03:13.299 CC lib/scsi/scsi_bdev.o 00:03:13.299 CC lib/scsi/scsi_pr.o 00:03:13.299 CC lib/scsi/scsi_rpc.o 00:03:13.299 CC lib/scsi/task.o 00:03:13.299 LIB libspdk_scsi.a 00:03:13.557 LIB libspdk_nvme.a 00:03:13.557 CC lib/iscsi/init_grp.o 00:03:13.557 CC lib/iscsi/conn.o 00:03:13.557 CC lib/iscsi/iscsi.o 00:03:13.557 CC lib/iscsi/md5.o 00:03:13.557 CC lib/iscsi/param.o 00:03:13.557 CC lib/iscsi/portal_grp.o 00:03:13.557 CC lib/iscsi/tgt_node.o 00:03:13.557 CC lib/iscsi/iscsi_subsystem.o 00:03:13.557 CC lib/iscsi/iscsi_rpc.o 00:03:13.557 CC lib/nvmf/ctrlr.o 00:03:13.815 CC lib/nvmf/ctrlr_discovery.o 00:03:13.815 CC lib/nvmf/ctrlr_bdev.o 00:03:13.815 CC lib/nvmf/subsystem.o 00:03:13.815 CC lib/nvmf/nvmf.o 00:03:13.815 CC lib/nvmf/nvmf_rpc.o 00:03:13.815 CC lib/nvmf/transport.o 00:03:13.815 CC lib/nvmf/tcp.o 00:03:13.815 CC lib/nvmf/stubs.o 00:03:13.815 CC lib/iscsi/task.o 00:03:13.815 CC lib/nvmf/mdns_server.o 00:03:13.815 CC lib/nvmf/rdma.o 00:03:13.815 CC lib/nvmf/auth.o 00:03:13.815 LIB libspdk_iscsi.a 00:03:14.074 LIB libspdk_nvmf.a 00:03:14.331 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.331 CC module/keyring/file/keyring.o 00:03:14.331 CC module/keyring/file/keyring_rpc.o 00:03:14.331 CC module/accel/ioat/accel_ioat.o 00:03:14.331 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.331 CC module/sock/posix/posix.o 
00:03:14.331 CC module/accel/dsa/accel_dsa.o 00:03:14.331 CC module/accel/iaa/accel_iaa.o 00:03:14.331 CC module/accel/error/accel_error.o 00:03:14.331 CC module/blob/bdev/blob_bdev.o 00:03:14.331 LIB libspdk_env_dpdk_rpc.a 00:03:14.331 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.589 CC module/accel/error/accel_error_rpc.o 00:03:14.589 LIB libspdk_keyring_file.a 00:03:14.589 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.589 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.589 LIB libspdk_scheduler_dynamic.a 00:03:14.589 LIB libspdk_accel_dsa.a 00:03:14.589 LIB libspdk_blob_bdev.a 00:03:14.589 LIB libspdk_accel_error.a 00:03:14.589 LIB libspdk_accel_ioat.a 00:03:14.589 LIB libspdk_accel_iaa.a 00:03:14.589 LIB libspdk_sock_posix.a 00:03:14.589 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.589 CC module/bdev/delay/vbdev_delay.o 00:03:14.589 CC module/bdev/error/vbdev_error.o 00:03:14.589 CC module/bdev/malloc/bdev_malloc.o 00:03:14.589 CC module/bdev/nvme/bdev_nvme.o 00:03:14.589 CC module/bdev/gpt/gpt.o 00:03:14.589 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.589 CC module/bdev/null/bdev_null.o 00:03:14.589 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.589 CC module/bdev/raid/bdev_raid.o 00:03:14.847 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.847 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.847 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.847 CC module/bdev/null/bdev_null_rpc.o 00:03:14.847 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.847 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.847 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.847 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.847 LIB libspdk_blobfs_bdev.a 00:03:14.847 LIB libspdk_bdev_error.a 00:03:14.847 LIB libspdk_bdev_gpt.a 00:03:14.847 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.847 LIB libspdk_bdev_null.a 00:03:14.847 LIB libspdk_bdev_passthru.a 00:03:14.847 CC module/bdev/nvme/nvme_rpc.o 00:03:14.847 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.847 LIB libspdk_bdev_malloc.a 00:03:14.847 LIB libspdk_bdev_delay.a 00:03:14.847 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.847 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.847 CC module/bdev/raid/raid0.o 00:03:14.847 CC module/bdev/split/vbdev_split.o 00:03:14.847 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.105 CC module/bdev/raid/raid1.o 00:03:15.105 LIB libspdk_bdev_lvol.a 00:03:15.105 CC module/bdev/raid/concat.o 00:03:15.105 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.105 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.105 LIB libspdk_bdev_split.a 00:03:15.105 CC module/bdev/aio/bdev_aio.o 00:03:15.105 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.105 LIB libspdk_bdev_raid.a 00:03:15.105 LIB libspdk_bdev_nvme.a 00:03:15.105 LIB libspdk_bdev_zone_block.a 00:03:15.105 LIB libspdk_bdev_aio.a 00:03:15.671 CC module/event/subsystems/vmd/vmd.o 00:03:15.671 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.671 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.671 CC module/event/subsystems/keyring/keyring.o 00:03:15.671 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.671 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.671 CC module/event/subsystems/sock/sock.o 00:03:15.671 LIB libspdk_event_vmd.a 00:03:15.671 LIB libspdk_event_keyring.a 00:03:15.671 LIB libspdk_event_scheduler.a 00:03:15.671 LIB libspdk_event_sock.a 00:03:15.671 LIB libspdk_event_iobuf.a 00:03:15.671 CC module/event/subsystems/accel/accel.o 00:03:15.929 LIB libspdk_event_accel.a 00:03:15.929 CC module/event/subsystems/bdev/bdev.o 00:03:16.188 LIB 
libspdk_event_bdev.a 00:03:16.446 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.446 CC module/event/subsystems/scsi/scsi.o 00:03:16.446 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.446 LIB libspdk_event_scsi.a 00:03:16.446 LIB libspdk_event_nvmf.a 00:03:16.704 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.704 LIB libspdk_event_iscsi.a 00:03:16.966 CXX app/trace/trace.o 00:03:16.966 CC app/spdk_lspci/spdk_lspci.o 00:03:16.966 CC app/trace_record/trace_record.o 00:03:16.966 CC examples/util/zipf/zipf.o 00:03:16.966 CC examples/ioat/perf/perf.o 00:03:16.966 CC app/nvmf_tgt/nvmf_main.o 00:03:16.966 CC test/thread/poller_perf/poller_perf.o 00:03:16.966 CC test/dma/test_dma/test_dma.o 00:03:16.966 LINK spdk_lspci 00:03:16.966 CC app/iscsi_tgt/iscsi_tgt.o 00:03:16.966 LINK zipf 00:03:16.966 CC app/spdk_tgt/spdk_tgt.o 00:03:16.966 LINK spdk_trace_record 00:03:16.966 LINK ioat_perf 00:03:16.966 LINK poller_perf 00:03:16.966 LINK nvmf_tgt 00:03:16.966 CC examples/ioat/verify/verify.o 00:03:16.966 LINK iscsi_tgt 00:03:16.966 LINK spdk_tgt 00:03:16.966 TEST_HEADER include/spdk/accel.h 00:03:16.966 TEST_HEADER include/spdk/accel_module.h 00:03:16.966 TEST_HEADER include/spdk/assert.h 00:03:16.966 TEST_HEADER include/spdk/barrier.h 00:03:16.966 TEST_HEADER include/spdk/base64.h 00:03:16.966 TEST_HEADER include/spdk/bdev.h 00:03:16.966 CC test/thread/lock/spdk_lock.o 00:03:16.966 TEST_HEADER include/spdk/bdev_module.h 00:03:16.966 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.966 TEST_HEADER include/spdk/bit_array.h 00:03:16.966 TEST_HEADER include/spdk/bit_pool.h 00:03:16.966 TEST_HEADER include/spdk/blob.h 00:03:16.966 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.966 TEST_HEADER include/spdk/blobfs.h 00:03:16.966 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.966 TEST_HEADER include/spdk/conf.h 00:03:16.966 TEST_HEADER include/spdk/config.h 00:03:16.966 TEST_HEADER include/spdk/cpuset.h 00:03:16.966 TEST_HEADER include/spdk/crc16.h 00:03:16.966 TEST_HEADER include/spdk/crc32.h 00:03:16.966 TEST_HEADER include/spdk/crc64.h 00:03:16.966 LINK test_dma 00:03:16.966 TEST_HEADER include/spdk/dif.h 00:03:16.966 TEST_HEADER include/spdk/dma.h 00:03:16.966 TEST_HEADER include/spdk/endian.h 00:03:16.966 CC test/app/bdev_svc/bdev_svc.o 00:03:16.966 TEST_HEADER include/spdk/env.h 00:03:16.966 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.966 TEST_HEADER include/spdk/event.h 00:03:16.966 TEST_HEADER include/spdk/fd.h 00:03:16.966 TEST_HEADER include/spdk/fd_group.h 00:03:16.966 TEST_HEADER include/spdk/file.h 00:03:16.966 TEST_HEADER include/spdk/ftl.h 00:03:16.966 CC examples/thread/thread/thread_ex.o 00:03:17.224 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.224 TEST_HEADER include/spdk/hexlify.h 00:03:17.224 TEST_HEADER include/spdk/histogram_data.h 00:03:17.224 TEST_HEADER include/spdk/idxd.h 00:03:17.224 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.224 TEST_HEADER include/spdk/init.h 00:03:17.224 TEST_HEADER include/spdk/ioat.h 00:03:17.224 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.224 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.224 TEST_HEADER include/spdk/json.h 00:03:17.224 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.224 TEST_HEADER include/spdk/keyring.h 00:03:17.224 TEST_HEADER include/spdk/keyring_module.h 00:03:17.224 TEST_HEADER include/spdk/likely.h 00:03:17.224 LINK verify 00:03:17.224 TEST_HEADER include/spdk/log.h 00:03:17.224 TEST_HEADER include/spdk/lvol.h 00:03:17.224 TEST_HEADER include/spdk/memory.h 00:03:17.224 TEST_HEADER include/spdk/mmio.h 00:03:17.224 TEST_HEADER 
include/spdk/nbd.h 00:03:17.224 TEST_HEADER include/spdk/net.h 00:03:17.224 TEST_HEADER include/spdk/notify.h 00:03:17.224 TEST_HEADER include/spdk/nvme.h 00:03:17.224 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.224 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.224 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.224 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.224 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.224 TEST_HEADER include/spdk/nvmf.h 00:03:17.224 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.224 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.224 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.224 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.224 TEST_HEADER include/spdk/opal.h 00:03:17.224 TEST_HEADER include/spdk/opal_spec.h 00:03:17.224 TEST_HEADER include/spdk/pci_ids.h 00:03:17.224 TEST_HEADER include/spdk/pipe.h 00:03:17.224 TEST_HEADER include/spdk/queue.h 00:03:17.224 TEST_HEADER include/spdk/reduce.h 00:03:17.224 TEST_HEADER include/spdk/rpc.h 00:03:17.224 TEST_HEADER include/spdk/scheduler.h 00:03:17.224 TEST_HEADER include/spdk/scsi.h 00:03:17.224 LINK bdev_svc 00:03:17.224 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.224 CC test/rpc_client/rpc_client_test.o 00:03:17.224 TEST_HEADER include/spdk/sock.h 00:03:17.224 TEST_HEADER include/spdk/stdinc.h 00:03:17.224 TEST_HEADER include/spdk/string.h 00:03:17.224 TEST_HEADER include/spdk/thread.h 00:03:17.224 TEST_HEADER include/spdk/trace.h 00:03:17.224 TEST_HEADER include/spdk/trace_parser.h 00:03:17.224 TEST_HEADER include/spdk/tree.h 00:03:17.224 TEST_HEADER include/spdk/ublk.h 00:03:17.224 TEST_HEADER include/spdk/util.h 00:03:17.224 TEST_HEADER include/spdk/uuid.h 00:03:17.224 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.224 LINK thread 00:03:17.224 CC test/app/histogram_perf/histogram_perf.o 00:03:17.224 TEST_HEADER include/spdk/version.h 00:03:17.224 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.224 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.224 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.224 TEST_HEADER include/spdk/vhost.h 00:03:17.224 TEST_HEADER include/spdk/vmd.h 00:03:17.224 TEST_HEADER include/spdk/xor.h 00:03:17.224 TEST_HEADER include/spdk/zipf.h 00:03:17.224 CXX test/cpp_headers/accel.o 00:03:17.224 LINK rpc_client_test 00:03:17.224 CC test/app/jsoncat/jsoncat.o 00:03:17.224 LINK histogram_perf 00:03:17.224 LINK spdk_lock 00:03:17.224 CC test/app/stub/stub.o 00:03:17.482 LINK jsoncat 00:03:17.482 LINK nvme_fuzz 00:03:17.482 CXX test/cpp_headers/accel_module.o 00:03:17.482 CC examples/sock/hello_world/hello_sock.o 00:03:17.482 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:17.482 LINK stub 00:03:17.482 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.482 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.482 LINK hello_sock 00:03:17.482 CC test/nvme/aer/aer.o 00:03:17.482 LINK histogram_ut 00:03:17.482 CXX test/cpp_headers/assert.o 00:03:17.482 CC test/unit/lib/log/log.c/log_ut.o 00:03:17.482 LINK lsvmd 00:03:17.482 CXX test/cpp_headers/barrier.o 00:03:17.482 LINK spdk_trace 00:03:17.482 CC app/spdk_nvme_perf/perf.o 00:03:17.740 LINK aer 00:03:17.740 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:17.740 LINK mem_callbacks 00:03:17.740 LINK log_ut 00:03:17.740 CC examples/vmd/led/led.o 00:03:17.740 CC test/nvme/reset/reset.o 00:03:17.740 CXX test/cpp_headers/base64.o 00:03:17.740 CC test/env/vtophys/vtophys.o 00:03:17.740 LINK led 00:03:17.740 CC app/spdk_nvme_identify/identify.o 00:03:17.740 CC test/accel/dif/dif.o 00:03:17.740 LINK reset 00:03:17.740 CC 
examples/idxd/perf/perf.o 00:03:17.740 LINK vtophys 00:03:17.740 LINK spdk_nvme_perf 00:03:17.740 CXX test/cpp_headers/bdev.o 00:03:17.740 CC test/nvme/sgl/sgl.o 00:03:17.740 LINK common_ut 00:03:17.740 CC app/spdk_nvme_discover/discovery_aer.o 00:03:17.740 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.998 LINK idxd_perf 00:03:17.998 LINK iscsi_fuzz 00:03:17.998 LINK sgl 00:03:17.998 CC test/nvme/e2edp/nvme_dp.o 00:03:17.998 LINK dif 00:03:17.998 LINK spdk_nvme_identify 00:03:17.998 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:17.998 LINK env_dpdk_post_init 00:03:17.998 CXX test/cpp_headers/bdev_module.o 00:03:17.998 LINK spdk_nvme_discover 00:03:17.998 CC test/nvme/overhead/overhead.o 00:03:17.998 CC examples/accel/perf/accel_perf.o 00:03:17.998 LINK nvme_dp 00:03:17.998 CC app/spdk_top/spdk_top.o 00:03:17.998 LINK base64_ut 00:03:17.998 LINK overhead 00:03:17.998 CC test/blobfs/mkfs/mkfs.o 00:03:17.998 CC app/fio/nvme/fio_plugin.o 00:03:17.998 CXX test/cpp_headers/bdev_zone.o 00:03:17.998 CC test/env/memory/memory_ut.o 00:03:17.998 LINK accel_perf 00:03:18.256 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:18.256 CC app/fio/bdev/fio_plugin.o 00:03:18.257 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:18.257 CC test/nvme/err_injection/err_injection.o 00:03:18.257 LINK mkfs 00:03:18.257 CXX test/cpp_headers/bit_array.o 00:03:18.257 LINK err_injection 00:03:18.257 CC examples/blob/hello_world/hello_blob.o 00:03:18.257 LINK spdk_top 00:03:18.257 fio_plugin.c:1584:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:18.257 struct spdk_nvme_fdp_ruhs ruhs; 00:03:18.257 ^ 00:03:18.257 LINK bit_array_ut 00:03:18.257 CC examples/nvme/hello_world/hello_world.o 00:03:18.257 LINK spdk_bdev 00:03:18.515 CC test/nvme/startup/startup.o 00:03:18.515 LINK hello_blob 00:03:18.515 LINK dma_ut 00:03:18.515 1 warning generated. 00:03:18.515 CXX test/cpp_headers/bit_pool.o 00:03:18.515 LINK spdk_nvme 00:03:18.515 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:18.515 CC examples/nvme/reconnect/reconnect.o 00:03:18.515 LINK hello_world 00:03:18.515 CC test/env/pci/pci_ut.o 00:03:18.515 LINK startup 00:03:18.515 CC examples/blob/cli/blobcli.o 00:03:18.515 LINK cpuset_ut 00:03:18.515 CXX test/cpp_headers/blob.o 00:03:18.515 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:18.515 LINK reconnect 00:03:18.515 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.515 CXX test/cpp_headers/blob_bdev.o 00:03:18.515 LINK pci_ut 00:03:18.515 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:18.515 CC test/nvme/reserve/reserve.o 00:03:18.515 LINK memory_ut 00:03:18.773 CXX test/cpp_headers/blobfs.o 00:03:18.773 LINK crc16_ut 00:03:18.773 LINK blobcli 00:03:18.773 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:18.773 LINK hello_bdev 00:03:18.773 CC test/nvme/simple_copy/simple_copy.o 00:03:18.773 LINK reserve 00:03:18.773 LINK ioat_ut 00:03:18.773 CC test/event/event_perf/event_perf.o 00:03:18.773 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:18.773 CC test/nvme/connect_stress/connect_stress.o 00:03:18.773 LINK simple_copy 00:03:18.773 LINK crc32_ieee_ut 00:03:18.773 LINK event_perf 00:03:18.773 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.773 gmake[2]: Nothing to be done for 'all'. 
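Note: the *_ut targets linked in this stretch of the build (for example LINK histogram_ut, produced from CC test/unit/include/spdk/histogram_data.h/histogram_ut.o above) are the standalone unit-test binaries that test/unit/unittest.sh exercises later in this log. As an illustrative sketch only, one of them can normally be invoked directly; placing the binary next to its object file is an assumption based on the CC lines above:

  # illustrative sketch, not part of the captured output; path is an assumption
  /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut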
00:03:18.773 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:18.773 LINK nvme_manage 00:03:18.773 CXX test/cpp_headers/conf.o 00:03:18.773 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.773 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:18.773 LINK connect_stress 00:03:18.773 CC test/bdev/bdevio/bdevio.o 00:03:18.773 LINK crc32c_ut 00:03:19.031 CC test/event/reactor/reactor.o 00:03:19.031 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:19.031 LINK crc64_ut 00:03:19.031 CC examples/nvme/arbitration/arbitration.o 00:03:19.031 LINK reactor 00:03:19.031 CXX test/cpp_headers/config.o 00:03:19.031 CC test/nvme/boot_partition/boot_partition.o 00:03:19.031 CXX test/cpp_headers/cpuset.o 00:03:19.031 CC test/event/reactor_perf/reactor_perf.o 00:03:19.031 CC examples/nvme/hotplug/hotplug.o 00:03:19.031 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.031 LINK arbitration 00:03:19.031 LINK bdevio 00:03:19.031 LINK reactor_perf 00:03:19.031 CC examples/nvme/abort/abort.o 00:03:19.031 LINK bdevperf 00:03:19.031 LINK boot_partition 00:03:19.031 LINK cmb_copy 00:03:19.031 LINK hotplug 00:03:19.031 CXX test/cpp_headers/crc16.o 00:03:19.031 CC test/unit/lib/util/file.c/file_ut.o 00:03:19.031 CXX test/cpp_headers/crc32.o 00:03:19.289 CC test/nvme/compliance/nvme_compliance.o 00:03:19.289 LINK abort 00:03:19.289 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.289 LINK file_ut 00:03:19.289 LINK dif_ut 00:03:19.289 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:19.289 CXX test/cpp_headers/crc64.o 00:03:19.289 CXX test/cpp_headers/dif.o 00:03:19.289 CXX test/cpp_headers/dma.o 00:03:19.289 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:19.289 LINK fused_ordering 00:03:19.289 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:19.289 CC test/nvme/fdp/fdp.o 00:03:19.289 LINK pmr_persistence 00:03:19.289 LINK nvme_compliance 00:03:19.289 CXX test/cpp_headers/endian.o 00:03:19.289 LINK iov_ut 00:03:19.289 CC test/unit/lib/util/math.c/math_ut.o 00:03:19.289 CXX test/cpp_headers/env.o 00:03:19.289 LINK doorbell_aers 00:03:19.289 CC test/unit/lib/util/net.c/net_ut.o 00:03:19.548 CXX test/cpp_headers/env_dpdk.o 00:03:19.548 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:19.548 LINK math_ut 00:03:19.548 LINK fdp 00:03:19.548 CC test/unit/lib/util/string.c/string_ut.o 00:03:19.548 CXX test/cpp_headers/event.o 00:03:19.548 LINK net_ut 00:03:19.548 CXX test/cpp_headers/fd.o 00:03:19.548 CC examples/nvmf/nvmf/nvmf.o 00:03:19.548 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:19.548 CXX test/cpp_headers/fd_group.o 00:03:19.548 CXX test/cpp_headers/file.o 00:03:19.548 CXX test/cpp_headers/ftl.o 00:03:19.548 CXX test/cpp_headers/gpt_spec.o 00:03:19.548 LINK string_ut 00:03:19.548 CXX test/cpp_headers/hexlify.o 00:03:19.548 CXX test/cpp_headers/histogram_data.o 00:03:19.548 CXX test/cpp_headers/idxd.o 00:03:19.548 LINK pipe_ut 00:03:19.548 CXX test/cpp_headers/idxd_spec.o 00:03:19.548 CXX test/cpp_headers/init.o 00:03:19.548 LINK nvmf 00:03:19.548 LINK xor_ut 00:03:19.548 CXX test/cpp_headers/ioat.o 00:03:19.548 CXX test/cpp_headers/ioat_spec.o 00:03:19.548 CXX test/cpp_headers/iscsi_spec.o 00:03:19.805 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:19.805 CXX test/cpp_headers/json.o 00:03:19.805 CXX test/cpp_headers/jsonrpc.o 00:03:19.805 CXX test/cpp_headers/keyring.o 00:03:19.805 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:19.805 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:19.806 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:19.806 CXX test/cpp_headers/keyring_module.o 00:03:19.806 CC 
test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:19.806 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:19.806 CXX test/cpp_headers/likely.o 00:03:19.806 CXX test/cpp_headers/log.o 00:03:19.806 CXX test/cpp_headers/lvol.o 00:03:19.806 CXX test/cpp_headers/memory.o 00:03:19.806 LINK pci_event_ut 00:03:19.806 LINK json_util_ut 00:03:20.064 CXX test/cpp_headers/mmio.o 00:03:20.064 CXX test/cpp_headers/nbd.o 00:03:20.064 CXX test/cpp_headers/net.o 00:03:20.064 LINK idxd_user_ut 00:03:20.064 CXX test/cpp_headers/notify.o 00:03:20.064 CXX test/cpp_headers/nvme.o 00:03:20.064 CXX test/cpp_headers/nvme_intel.o 00:03:20.064 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.064 LINK idxd_ut 00:03:20.064 LINK json_write_ut 00:03:20.064 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.064 CXX test/cpp_headers/nvme_spec.o 00:03:20.064 CXX test/cpp_headers/nvme_zns.o 00:03:20.064 CXX test/cpp_headers/nvmf.o 00:03:20.064 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.064 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.064 CXX test/cpp_headers/nvmf_spec.o 00:03:20.064 CXX test/cpp_headers/nvmf_transport.o 00:03:20.064 CXX test/cpp_headers/opal.o 00:03:20.064 CXX test/cpp_headers/opal_spec.o 00:03:20.064 LINK json_parse_ut 00:03:20.064 CXX test/cpp_headers/pci_ids.o 00:03:20.064 CXX test/cpp_headers/pipe.o 00:03:20.331 CXX test/cpp_headers/queue.o 00:03:20.331 CXX test/cpp_headers/reduce.o 00:03:20.331 CXX test/cpp_headers/rpc.o 00:03:20.331 CXX test/cpp_headers/scheduler.o 00:03:20.331 CXX test/cpp_headers/scsi.o 00:03:20.331 CXX test/cpp_headers/scsi_spec.o 00:03:20.331 CXX test/cpp_headers/sock.o 00:03:20.331 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:20.331 CXX test/cpp_headers/stdinc.o 00:03:20.331 CXX test/cpp_headers/string.o 00:03:20.331 CXX test/cpp_headers/thread.o 00:03:20.331 CXX test/cpp_headers/trace.o 00:03:20.331 CXX test/cpp_headers/trace_parser.o 00:03:20.331 CXX test/cpp_headers/tree.o 00:03:20.331 CXX test/cpp_headers/ublk.o 00:03:20.331 CXX test/cpp_headers/util.o 00:03:20.331 CXX test/cpp_headers/uuid.o 00:03:20.331 LINK jsonrpc_server_ut 00:03:20.331 CXX test/cpp_headers/version.o 00:03:20.331 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.331 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.331 CXX test/cpp_headers/vhost.o 00:03:20.331 CXX test/cpp_headers/vmd.o 00:03:20.331 CXX test/cpp_headers/xor.o 00:03:20.610 CXX test/cpp_headers/zipf.o 00:03:20.610 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:20.610 LINK rpc_ut 00:03:20.867 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:20.867 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:20.867 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:20.867 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:20.867 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:20.867 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:21.126 LINK iobuf_ut 00:03:21.126 LINK keyring_ut 00:03:21.126 LINK notify_ut 00:03:21.126 LINK posix_ut 00:03:21.384 LINK thread_ut 00:03:21.384 LINK sock_ut 00:03:21.384 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:21.384 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:21.384 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:21.384 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:21.641 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:21.641 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:21.641 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:21.641 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:21.641 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:21.641 CC 
test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:21.641 LINK rpc_ut 00:03:21.641 LINK subsystem_ut 00:03:21.641 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:21.641 LINK blob_bdev_ut 00:03:21.641 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:21.899 CC test/unit/lib/event/app.c/app_ut.o 00:03:21.899 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:21.899 LINK accel_ut 00:03:21.899 LINK nvme_ctrlr_cmd_ut 00:03:21.899 LINK app_ut 00:03:21.899 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:22.157 LINK nvme_ns_ut 00:03:22.157 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:22.157 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:22.157 LINK nvme_ut 00:03:22.157 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:22.157 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:22.157 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:22.157 LINK reactor_ut 00:03:22.415 LINK nvme_ctrlr_ut 00:03:22.415 LINK nvme_ns_ocssd_cmd_ut 00:03:22.415 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:22.415 LINK nvme_ns_cmd_ut 00:03:22.415 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:22.415 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:22.415 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:22.415 LINK scsi_nvme_ut 00:03:22.674 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:22.674 LINK gpt_ut 00:03:22.674 LINK nvme_poll_group_ut 00:03:22.674 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:22.674 LINK nvme_qpair_ut 00:03:22.674 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:22.674 LINK nvme_quirks_ut 00:03:22.674 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:22.674 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:22.932 LINK nvme_pcie_ut 00:03:22.932 LINK blob_ut 00:03:22.932 LINK part_ut 00:03:22.932 LINK vbdev_lvol_ut 00:03:22.932 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:22.932 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:22.932 LINK bdev_zone_ut 00:03:22.932 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:22.932 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:22.932 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:22.932 LINK tree_ut 00:03:23.191 LINK bdev_ut 00:03:23.191 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:23.191 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:23.191 LINK nvme_transport_ut 00:03:23.191 LINK vbdev_zone_block_ut 00:03:23.191 LINK bdev_raid_ut 00:03:23.191 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:23.191 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:23.449 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:23.449 LINK nvme_io_msg_ut 00:03:23.449 LINK nvme_tcp_ut 00:03:23.449 LINK bdev_ut 00:03:23.449 LINK bdev_raid_sb_ut 00:03:23.449 LINK blobfs_async_ut 00:03:23.449 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:23.449 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:23.449 LINK lvol_ut 00:03:23.449 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:23.449 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:23.449 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:23.449 LINK nvme_pcie_common_ut 00:03:23.449 LINK blobfs_sync_ut 00:03:23.449 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:23.449 LINK blobfs_bdev_ut 00:03:23.708 LINK nvme_fabric_ut 00:03:23.708 LINK concat_ut 00:03:23.708 LINK raid1_ut 00:03:23.708 LINK nvme_opal_ut 00:03:23.708 LINK raid0_ut 00:03:24.277 LINK nvme_rdma_ut 00:03:24.277 LINK 
bdev_nvme_ut 00:03:24.536 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:24.536 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:24.536 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:24.536 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:24.536 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:24.536 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:24.536 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:24.536 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:24.536 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:24.536 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:24.536 LINK scsi_ut 00:03:24.536 LINK dev_ut 00:03:24.796 LINK scsi_pr_ut 00:03:24.796 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:24.796 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:24.796 LINK lun_ut 00:03:24.796 LINK ctrlr_bdev_ut 00:03:24.796 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:24.796 LINK scsi_bdev_ut 00:03:24.796 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:24.796 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:24.796 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:25.056 LINK ctrlr_discovery_ut 00:03:25.056 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:25.056 LINK subsystem_ut 00:03:25.056 LINK nvmf_ut 00:03:25.056 LINK init_grp_ut 00:03:25.056 LINK ctrlr_ut 00:03:25.056 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:25.056 LINK auth_ut 00:03:25.056 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:25.056 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:25.314 LINK conn_ut 00:03:25.315 LINK param_ut 00:03:25.315 LINK tcp_ut 00:03:25.315 LINK transport_ut 00:03:25.315 LINK rdma_ut 00:03:25.315 LINK portal_grp_ut 00:03:25.574 LINK tgt_node_ut 00:03:25.574 LINK iscsi_ut 00:03:25.574 00:03:25.574 real 1m9.565s 00:03:25.574 user 3m52.274s 00:03:25.574 sys 0m50.597s 00:03:25.574 02:28:12 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:25.574 02:28:12 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:25.574 ************************************ 00:03:25.574 END TEST unittest_build 00:03:25.574 ************************************ 00:03:25.574 02:28:12 -- common/autotest_common.sh@1142 -- $ return 0 00:03:25.574 02:28:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.574 02:28:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.574 02:28:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.574 02:28:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.574 02:28:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.574 02:28:12 -- pm/common@44 -- $ pid=1326 00:03:25.574 02:28:12 -- pm/common@50 -- $ kill -TERM 1326 00:03:25.834 02:28:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.834 02:28:12 -- nvmf/common.sh@7 -- # uname -s 00:03:25.834 02:28:12 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:25.834 02:28:12 -- nvmf/common.sh@7 -- # return 0 00:03:25.834 02:28:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.834 02:28:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.834 02:28:12 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:25.834 02:28:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.834 02:28:12 -- pm/common@17 -- # local monitor 00:03:25.834 02:28:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.834 02:28:12 -- pm/common@25 -- # sleep 1 00:03:25.834 02:28:12 -- pm/common@21 -- # date +%s 00:03:25.834 
02:28:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721874492 00:03:25.834 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721874492_collect-vmstat.pm.log 00:03:26.773 02:28:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.773 02:28:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.773 02:28:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:26.773 02:28:13 -- common/autotest_common.sh@10 -- # set +x 00:03:26.773 02:28:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.773 02:28:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:26.773 02:28:13 -- common/autotest_common.sh@10 -- # set +x 00:03:27.033 02:28:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:27.033 02:28:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:27.033 02:28:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:27.033 02:28:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:27.033 02:28:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:27.033 02:28:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.033 02:28:13 -- common/autotest_common.sh@1455 -- # uname 00:03:27.033 02:28:13 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']' 00:03:27.033 02:28:13 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko 00:03:27.033 kldunload: can't find file contigmem.ko 00:03:27.033 02:28:13 -- common/autotest_common.sh@1456 -- # true 00:03:27.033 02:28:13 -- common/autotest_common.sh@1457 -- # '[' -n '' ']' 00:03:27.033 02:28:13 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:27.033 02:28:13 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:27.033 02:28:13 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:27.033 02:28:13 -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:27.033 02:28:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.033 02:28:13 -- common/autotest_common.sh@1475 -- # uname 00:03:27.033 02:28:13 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]] 00:03:27.033 02:28:13 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf 00:03:27.033 02:28:13 -- common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 )) 00:03:27.033 02:28:13 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:27.033 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:27.033 02:28:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:27.033 02:28:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:27.033 02:28:13 -- spdk/autotest.sh@72 -- # hash lcov 00:03:27.033 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:03:27.033 02:28:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:27.033 02:28:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:27.033 02:28:13 -- common/autotest_common.sh@10 -- # set +x 00:03:27.033 02:28:13 -- spdk/autotest.sh@91 -- # rm -f 00:03:27.033 02:28:13 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 
reset 00:03:27.033 kldunload: can't find file contigmem.ko 00:03:27.033 kldunload: can't find file nic_uio.ko 00:03:27.033 02:28:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:27.033 02:28:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.033 02:28:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.033 02:28:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.033 02:28:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:27.033 02:28:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.033 02:28:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:27.033 02:28:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:03:27.033 02:28:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:03:27.033 02:28:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:27.033 nvme0ns1 is not a block device 00:03:27.033 02:28:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:27.033 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:03:27.033 02:28:13 -- scripts/common.sh@391 -- # pt= 00:03:27.033 02:28:13 -- scripts/common.sh@392 -- # return 1 00:03:27.033 02:28:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:27.033 1+0 records in 00:03:27.033 1+0 records out 00:03:27.033 1048576 bytes transferred in 0.008094 secs (129547165 bytes/sec) 00:03:27.033 02:28:13 -- spdk/autotest.sh@118 -- # sync 00:03:27.972 02:28:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.972 02:28:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.972 02:28:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:28.911 02:28:15 -- spdk/autotest.sh@124 -- # uname -s 00:03:28.911 02:28:15 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:03:28.911 02:28:15 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:28.911 Contigmem (not present) 00:03:28.911 Buffer Size: not set 00:03:28.911 Num Buffers: not set 00:03:28.911 00:03:28.911 00:03:28.911 Type BDF Vendor Device Driver 00:03:28.911 NVMe 0:16:0 0x1b36 0x0010 nvme0 00:03:28.911 02:28:15 -- spdk/autotest.sh@130 -- # uname -s 00:03:28.911 02:28:15 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:03:28.911 02:28:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:28.911 02:28:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:28.911 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:03:28.911 02:28:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:28.911 02:28:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:28.911 02:28:15 -- common/autotest_common.sh@10 -- # set +x 00:03:28.911 02:28:15 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:29.171 hw.nic_uio.bdfs="0:16:0" 00:03:29.171 hw.contigmem.num_buffers="8" 00:03:29.171 hw.contigmem.buffer_size="268435456" 00:03:29.739 02:28:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:29.739 02:28:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:29.739 02:28:16 -- common/autotest_common.sh@10 -- # set +x 00:03:29.739 02:28:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:29.739 02:28:16 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:29.739 02:28:16 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:29.739 02:28:16 -- common/autotest_common.sh@1577 
-- # bdfs=() 00:03:29.739 02:28:16 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:29.739 02:28:16 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:29.739 02:28:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:29.739 02:28:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:29.739 02:28:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:29.739 02:28:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:29.739 02:28:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:29.739 02:28:16 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:29.739 02:28:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:03:29.739 02:28:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:29.739 02:28:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:29.739 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:03:29.739 02:28:16 -- common/autotest_common.sh@1580 -- # device= 00:03:29.739 02:28:16 -- common/autotest_common.sh@1580 -- # true 00:03:29.739 02:28:16 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:29.739 02:28:16 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:29.739 02:28:16 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:29.739 02:28:16 -- common/autotest_common.sh@1593 -- # return 0 00:03:29.739 02:28:16 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:03:29.739 02:28:16 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:29.739 02:28:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.739 02:28:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.739 02:28:16 -- common/autotest_common.sh@10 -- # set +x 00:03:29.739 ************************************ 00:03:29.739 START TEST unittest 00:03:29.739 ************************************ 00:03:29.739 02:28:16 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:29.739 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:29.739 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:03:29.739 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:03:29.739 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:29.739 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
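Note: the autotest prologue above stages the FreeBSD kernel modules built earlier (contigmem.ko, nic_uio.ko), bumps kern.ipc.maxsockbuf, and has setup.sh apply the hw.* tunables before the unit tests run. As an illustrative sketch only -- the copy destinations, the maxsockbuf bump, and the hw.* values are taken from the log above, while using kenv before kldload is an assumption about how setup.sh applies them -- the equivalent manual steps would look roughly like:

  # illustrative sketch, not part of the captured output (run as root)
  cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/
  cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/
  sysctl kern.ipc.maxsockbuf=4194304          # log shows 2097152 -> 4194304
  kenv hw.contigmem.num_buffers=8             # values printed by setup.sh above
  kenv hw.contigmem.buffer_size=268435456
  kenv hw.nic_uio.bdfs="0:16:0"
  kldload contigmem nic_uio                   # assumption: setup.sh loads both modules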
00:03:29.739 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:29.739 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:29.739 ++ rpc_py=rpc_cmd 00:03:29.739 ++ set -e 00:03:29.739 ++ shopt -s nullglob 00:03:29.739 ++ shopt -s extglob 00:03:29.739 ++ shopt -s inherit_errexit 00:03:29.739 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:29.739 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:29.739 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:29.739 +++ CONFIG_WPDK_DIR= 00:03:29.739 +++ CONFIG_ASAN=n 00:03:29.739 +++ CONFIG_VBDEV_COMPRESS=n 00:03:29.739 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:29.739 +++ CONFIG_USDT=n 00:03:29.739 +++ CONFIG_CUSTOMOCF=n 00:03:29.739 +++ CONFIG_PREFIX=/usr/local 00:03:29.739 +++ CONFIG_RBD=n 00:03:29.740 +++ CONFIG_LIBDIR= 00:03:29.740 +++ CONFIG_IDXD=y 00:03:29.740 +++ CONFIG_NVME_CUSE=n 00:03:29.740 +++ CONFIG_SMA=n 00:03:29.740 +++ CONFIG_VTUNE=n 00:03:29.740 +++ CONFIG_TSAN=n 00:03:29.740 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:29.740 +++ CONFIG_VFIO_USER_DIR= 00:03:29.740 +++ CONFIG_PGO_CAPTURE=n 00:03:29.740 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:29.740 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:29.740 +++ CONFIG_LTO=n 00:03:29.740 +++ CONFIG_ISCSI_INITIATOR=n 00:03:29.740 +++ CONFIG_CET=n 00:03:29.740 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:29.740 +++ CONFIG_OCF_PATH= 00:03:29.740 +++ CONFIG_RDMA_SET_TOS=y 00:03:29.740 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:29.740 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:29.740 +++ CONFIG_UBLK=n 00:03:29.740 +++ CONFIG_ISAL_CRYPTO=y 00:03:29.740 +++ CONFIG_OPENSSL_PATH= 00:03:29.740 +++ CONFIG_OCF=n 00:03:29.740 +++ CONFIG_FUSE=n 00:03:29.740 +++ CONFIG_VTUNE_DIR= 00:03:29.740 +++ CONFIG_FUZZER_LIB= 00:03:29.740 +++ CONFIG_FUZZER=n 00:03:29.740 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:29.740 +++ CONFIG_CRYPTO=n 00:03:29.740 +++ CONFIG_PGO_USE=n 00:03:29.740 +++ CONFIG_VHOST=n 00:03:29.740 +++ CONFIG_DAOS=n 00:03:29.740 +++ CONFIG_DPDK_INC_DIR= 00:03:29.740 +++ CONFIG_DAOS_DIR= 00:03:29.740 +++ CONFIG_UNIT_TESTS=y 00:03:29.740 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:29.740 +++ CONFIG_VIRTIO=n 00:03:29.740 +++ CONFIG_DPDK_UADK=n 00:03:29.740 +++ CONFIG_COVERAGE=n 00:03:29.740 +++ CONFIG_RDMA=y 00:03:29.740 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:29.740 +++ CONFIG_URING_PATH= 00:03:29.740 +++ CONFIG_XNVME=n 00:03:29.740 +++ CONFIG_VFIO_USER=n 00:03:29.740 +++ CONFIG_ARCH=native 00:03:29.740 +++ CONFIG_HAVE_EVP_MAC=y 00:03:29.740 +++ CONFIG_URING_ZNS=n 00:03:29.740 +++ CONFIG_WERROR=y 00:03:29.740 +++ CONFIG_HAVE_LIBBSD=n 00:03:29.740 +++ CONFIG_UBSAN=n 00:03:29.740 +++ CONFIG_IPSEC_MB_DIR= 00:03:29.740 +++ CONFIG_GOLANG=n 00:03:29.740 +++ CONFIG_ISAL=y 00:03:29.740 +++ CONFIG_IDXD_KERNEL=n 00:03:29.740 +++ CONFIG_DPDK_LIB_DIR= 00:03:29.740 +++ CONFIG_RDMA_PROV=verbs 00:03:29.740 +++ CONFIG_APPS=y 00:03:29.740 +++ CONFIG_SHARED=n 00:03:29.740 +++ CONFIG_HAVE_KEYUTILS=n 00:03:29.740 +++ CONFIG_FC_PATH= 00:03:29.740 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:29.740 +++ CONFIG_FC=n 00:03:29.740 +++ CONFIG_AVAHI=n 00:03:29.740 +++ CONFIG_FIO_PLUGIN=y 00:03:29.740 +++ CONFIG_RAID5F=n 00:03:29.740 +++ CONFIG_EXAMPLES=y 00:03:29.740 +++ CONFIG_TESTS=y 00:03:29.740 +++ CONFIG_CRYPTO_MLX5=n 00:03:29.740 +++ CONFIG_MAX_LCORES=128 00:03:29.740 +++ CONFIG_IPSEC_MB=n 00:03:29.740 +++ CONFIG_PGO_DIR= 00:03:29.740 +++ CONFIG_DEBUG=y 00:03:29.740 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:29.740 +++ CONFIG_CROSS_PREFIX= 00:03:29.740 
+++ CONFIG_URING=n 00:03:29.740 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:29.740 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:29.740 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:03:29.740 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:29.740 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:29.740 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:29.740 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:29.740 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:29.740 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:29.740 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:29.740 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:29.740 +++ VHOST_APP=("$_app_dir/vhost") 00:03:29.740 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:29.740 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:29.740 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:29.740 +++ [[ #ifndef SPDK_CONFIG_H 00:03:29.740 #define SPDK_CONFIG_H 00:03:29.740 #define SPDK_CONFIG_APPS 1 00:03:29.740 #define SPDK_CONFIG_ARCH native 00:03:29.740 #undef SPDK_CONFIG_ASAN 00:03:29.740 #undef SPDK_CONFIG_AVAHI 00:03:29.740 #undef SPDK_CONFIG_CET 00:03:29.740 #undef SPDK_CONFIG_COVERAGE 00:03:29.740 #define SPDK_CONFIG_CROSS_PREFIX 00:03:29.740 #undef SPDK_CONFIG_CRYPTO 00:03:29.740 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:29.740 #undef SPDK_CONFIG_CUSTOMOCF 00:03:29.740 #undef SPDK_CONFIG_DAOS 00:03:29.740 #define SPDK_CONFIG_DAOS_DIR 00:03:29.740 #define SPDK_CONFIG_DEBUG 1 00:03:29.740 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:29.740 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:29.740 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:29.740 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:29.740 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:29.740 #undef SPDK_CONFIG_DPDK_UADK 00:03:29.740 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:29.740 #define SPDK_CONFIG_EXAMPLES 1 00:03:29.740 #undef SPDK_CONFIG_FC 00:03:29.740 #define SPDK_CONFIG_FC_PATH 00:03:29.740 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:29.740 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:29.740 #undef SPDK_CONFIG_FUSE 00:03:29.740 #undef SPDK_CONFIG_FUZZER 00:03:29.740 #define SPDK_CONFIG_FUZZER_LIB 00:03:29.740 #undef SPDK_CONFIG_GOLANG 00:03:29.740 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:29.740 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:29.740 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:29.740 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:29.740 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:29.740 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:29.740 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:29.740 #define SPDK_CONFIG_IDXD 1 00:03:29.740 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:29.740 #undef SPDK_CONFIG_IPSEC_MB 00:03:29.740 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:29.740 #define SPDK_CONFIG_ISAL 1 00:03:29.740 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:29.740 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:29.740 #define SPDK_CONFIG_LIBDIR 00:03:29.740 #undef SPDK_CONFIG_LTO 00:03:29.740 #define SPDK_CONFIG_MAX_LCORES 128 00:03:29.740 #undef SPDK_CONFIG_NVME_CUSE 00:03:29.740 #undef SPDK_CONFIG_OCF 00:03:29.740 #define SPDK_CONFIG_OCF_PATH 00:03:29.740 #define SPDK_CONFIG_OPENSSL_PATH 00:03:29.740 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:29.740 #define SPDK_CONFIG_PGO_DIR 00:03:29.740 #undef SPDK_CONFIG_PGO_USE 00:03:29.740 #define SPDK_CONFIG_PREFIX /usr/local 00:03:29.740 #undef SPDK_CONFIG_RAID5F 00:03:29.740 #undef SPDK_CONFIG_RBD 
00:03:29.740 #define SPDK_CONFIG_RDMA 1 00:03:29.740 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:29.740 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:29.740 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:29.740 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:29.740 #undef SPDK_CONFIG_SHARED 00:03:29.740 #undef SPDK_CONFIG_SMA 00:03:29.740 #define SPDK_CONFIG_TESTS 1 00:03:29.740 #undef SPDK_CONFIG_TSAN 00:03:29.740 #undef SPDK_CONFIG_UBLK 00:03:29.740 #undef SPDK_CONFIG_UBSAN 00:03:29.740 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:29.740 #undef SPDK_CONFIG_URING 00:03:29.740 #define SPDK_CONFIG_URING_PATH 00:03:29.740 #undef SPDK_CONFIG_URING_ZNS 00:03:29.740 #undef SPDK_CONFIG_USDT 00:03:29.740 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:29.740 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:29.740 #undef SPDK_CONFIG_VFIO_USER 00:03:29.740 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:29.740 #undef SPDK_CONFIG_VHOST 00:03:29.740 #undef SPDK_CONFIG_VIRTIO 00:03:29.740 #undef SPDK_CONFIG_VTUNE 00:03:29.740 #define SPDK_CONFIG_VTUNE_DIR 00:03:29.740 #define SPDK_CONFIG_WERROR 1 00:03:29.740 #define SPDK_CONFIG_WPDK_DIR 00:03:29.740 #undef SPDK_CONFIG_XNVME 00:03:29.740 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:29.740 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:29.740 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:29.740 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:29.740 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:29.740 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:29.740 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:29.740 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:29.740 ++++ export PATH 00:03:29.740 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:29.740 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:29.740 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:29.740 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:30.001 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:30.001 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:30.001 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:30.001 +++ TEST_TAG=N/A 00:03:30.001 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:30.001 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:03:30.001 ++++ uname -s 00:03:30.001 +++ PM_OS=FreeBSD 00:03:30.001 +++ MONITOR_RESOURCES_SUDO=() 00:03:30.001 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:30.001 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:30.001 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:30.001 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:30.001 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:30.001 +++ SUDO[0]= 00:03:30.001 +++ SUDO[1]='sudo -E' 00:03:30.001 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:30.001 +++ [[ FreeBSD == FreeBSD ]] 00:03:30.001 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:30.001 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:30.001 ++ : 0 00:03:30.001 ++ export RUN_NIGHTLY 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_RUN_VALGRIND 00:03:30.001 ++ : 1 00:03:30.001 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:30.001 ++ : 1 00:03:30.001 ++ export SPDK_TEST_UNITTEST 00:03:30.001 ++ : 00:03:30.001 ++ export SPDK_TEST_AUTOBUILD 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_RELEASE_BUILD 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_ISAL 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_ISCSI 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:30.001 ++ : 1 00:03:30.001 ++ export SPDK_TEST_NVME 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVME_PMR 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVME_BP 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVME_CLI 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVME_CUSE 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVME_FDP 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVMF 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_VFIOUSER 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_FUZZER 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_FUZZER_SHORT 00:03:30.001 ++ : rdma 00:03:30.001 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_RBD 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_VHOST 00:03:30.001 ++ : 1 00:03:30.001 ++ export SPDK_TEST_BLOCKDEV 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_IOAT 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_BLOBFS 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_VHOST_INIT 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_LVOL 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_RUN_ASAN 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_RUN_UBSAN 00:03:30.001 ++ : 00:03:30.001 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_RUN_NON_ROOT 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_CRYPTO 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_FTL 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_OCF 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_VMD 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_OPAL 00:03:30.001 ++ : 00:03:30.001 ++ export SPDK_TEST_NATIVE_DPDK 00:03:30.001 ++ : true 00:03:30.001 ++ export SPDK_AUTOTEST_X 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_RAID5 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_URING 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_USDT 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_USE_IGB_UIO 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_SCHEDULER 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_SCANBUILD 00:03:30.001 ++ : 00:03:30.001 ++ export SPDK_TEST_NVMF_NICS 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_SMA 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_DAOS 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_XNVME 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_ACCEL_DSA 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_ACCEL_IAA 00:03:30.001 ++ : 00:03:30.001 ++ export SPDK_TEST_FUZZER_TARGET 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_TEST_NVMF_MDNS 00:03:30.001 ++ : 0 00:03:30.001 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:30.001 ++ export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:30.001 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:30.001 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:30.001 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:30.001 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.001 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.001 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.001 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.001 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:30.001 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:30.001 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:30.001 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:30.001 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:30.001 ++ PYTHONDONTWRITEBYTECODE=1 00:03:30.001 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:30.001 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:30.001 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:30.001 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:30.001 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:30.001 ++ rm -rf /var/tmp/asan_suppression_file 00:03:30.001 ++ cat 00:03:30.001 ++ echo leak:libfuse3.so 00:03:30.001 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:30.001 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:30.001 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:30.001 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:30.001 ++ '[' -z /var/spdk/dependencies ']' 00:03:30.001 ++ export DEPENDENCY_DIR 00:03:30.001 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:30.001 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:30.001 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:30.001 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:30.001 ++ export QEMU_BIN= 00:03:30.001 ++ QEMU_BIN= 00:03:30.001 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:30.001 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:30.001 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:30.001 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:30.002 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:30.002 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:30.002 ++ '[' 0 -eq 0 ']' 00:03:30.002 ++ export valgrind= 00:03:30.002 ++ valgrind= 00:03:30.002 +++ uname -s 00:03:30.002 ++ '[' FreeBSD = Linux ']' 
00:03:30.002 +++ uname -s 00:03:30.002 ++ '[' FreeBSD = FreeBSD ']' 00:03:30.002 ++ MAKE=gmake 00:03:30.002 +++ sysctl -a 00:03:30.002 +++ grep -E -i hw.ncpu 00:03:30.002 +++ awk '{print $2}' 00:03:30.002 ++ MAKEFLAGS=-j10 00:03:30.002 ++ HUGEMEM=2048 00:03:30.002 ++ export HUGEMEM=2048 00:03:30.002 ++ HUGEMEM=2048 00:03:30.002 ++ NO_HUGE=() 00:03:30.002 ++ TEST_MODE= 00:03:30.002 ++ [[ -z '' ]] 00:03:30.002 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:30.002 ++ exec 00:03:30.002 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:30.002 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:30.002 ++ set_test_storage 2147483648 00:03:30.002 ++ [[ -v testdir ]] 00:03:30.002 ++ local requested_size=2147483648 00:03:30.002 ++ local mount target_dir 00:03:30.002 ++ local -A mounts fss sizes avails uses 00:03:30.002 ++ local source fs size avail mount use 00:03:30.002 ++ local storage_fallback storage_candidates 00:03:30.002 +++ mktemp -udt spdk.XXXXXX 00:03:30.002 ++ storage_fallback=/tmp/spdk.XXXXXX.cOyP0Hv0Q1 00:03:30.002 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:30.002 ++ [[ -n '' ]] 00:03:30.002 ++ [[ -n '' ]] 00:03:30.002 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.cOyP0Hv0Q1/tests/unit /tmp/spdk.XXXXXX.cOyP0Hv0Q1 00:03:30.002 ++ requested_size=2214592512 00:03:30.002 ++ read -r source fs size use avail _ mount 00:03:30.002 +++ df -T 00:03:30.002 +++ grep -v Filesystem 00:03:30.002 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:03:30.002 ++ fss["$mount"]=ufs 00:03:30.002 ++ avails["$mount"]=17234931712 00:03:30.002 ++ sizes["$mount"]=31182712832 00:03:30.002 ++ uses["$mount"]=11453165568 00:03:30.002 ++ read -r source fs size use avail _ mount 00:03:30.002 ++ mounts["$mount"]=devfs 00:03:30.002 ++ fss["$mount"]=devfs 00:03:30.002 ++ avails["$mount"]=1024 00:03:30.002 ++ sizes["$mount"]=1024 00:03:30.002 ++ uses["$mount"]=0 00:03:30.002 ++ read -r source fs size use avail _ mount 00:03:30.002 ++ mounts["$mount"]=tmpfs 00:03:30.002 ++ fss["$mount"]=tmpfs 00:03:30.002 ++ avails["$mount"]=2147438592 00:03:30.002 ++ sizes["$mount"]=2147483648 00:03:30.002 ++ uses["$mount"]=45056 00:03:30.002 ++ read -r source fs size use avail _ mount 00:03:30.002 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd14-libvirt/output 00:03:30.002 ++ fss["$mount"]=fusefs.sshfs 00:03:30.002 ++ avails["$mount"]=95427833856 00:03:30.002 ++ sizes["$mount"]=105088212992 00:03:30.002 ++ uses["$mount"]=4274946048 00:03:30.002 ++ read -r source fs size use avail _ mount 00:03:30.002 ++ printf '* Looking for test storage...\n' 00:03:30.002 * Looking for test storage... 
00:03:30.002 ++ local target_space new_size 00:03:30.002 ++ for target_dir in "${storage_candidates[@]}" 00:03:30.002 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:30.002 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:30.002 ++ mount=/ 00:03:30.002 ++ target_space=17234931712 00:03:30.002 ++ (( target_space == 0 || target_space < requested_size )) 00:03:30.002 ++ (( target_space >= requested_size )) 00:03:30.002 ++ [[ ufs == tmpfs ]] 00:03:30.002 ++ [[ ufs == ramfs ]] 00:03:30.002 ++ [[ / == / ]] 00:03:30.002 ++ new_size=13667758080 00:03:30.002 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:30.002 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.002 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.002 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:30.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:30.002 ++ return 0 00:03:30.002 ++ set -o errtrace 00:03:30.002 ++ shopt -s extdebug 00:03:30.002 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:30.002 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1687 -- # true 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@29 -- # exec 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:30.002 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.002 ************************************ 00:03:30.002 START TEST unittest_pci_event 00:03:30.002 ************************************ 00:03:30.002 02:28:16 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:30.002 00:03:30.002 
00:03:30.002 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.002 http://cunit.sourceforge.net/ 00:03:30.002 00:03:30.002 00:03:30.002 Suite: pci_event 00:03:30.002 Test: test_pci_parse_event ...passed 00:03:30.002 00:03:30.002 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.002 suites 1 1 n/a 0 0 00:03:30.002 tests 1 1 1 0 0 00:03:30.002 asserts 1 1 1 0 n/a 00:03:30.002 00:03:30.002 Elapsed time = 0.000 seconds 00:03:30.002 00:03:30.002 real 0m0.034s 00:03:30.002 user 0m0.007s 00:03:30.002 sys 0m0.011s 00:03:30.002 02:28:16 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.002 02:28:16 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:30.002 ************************************ 00:03:30.002 END TEST unittest_pci_event 00:03:30.002 ************************************ 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:30.002 02:28:16 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.002 02:28:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.002 ************************************ 00:03:30.002 START TEST unittest_include 00:03:30.002 ************************************ 00:03:30.002 02:28:16 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:30.002 00:03:30.002 00:03:30.002 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.002 http://cunit.sourceforge.net/ 00:03:30.002 00:03:30.002 00:03:30.002 Suite: histogram 00:03:30.002 Test: histogram_test ...passed 00:03:30.002 Test: histogram_merge ...passed 00:03:30.002 00:03:30.002 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.002 suites 1 1 n/a 0 0 00:03:30.002 tests 2 2 2 0 0 00:03:30.002 asserts 50 50 50 0 n/a 00:03:30.002 00:03:30.002 Elapsed time = 0.008 seconds 00:03:30.002 00:03:30.002 real 0m0.011s 00:03:30.002 user 0m0.009s 00:03:30.002 sys 0m0.001s 00:03:30.002 02:28:16 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.002 02:28:16 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:30.002 ************************************ 00:03:30.002 END TEST unittest_include 00:03:30.002 ************************************ 00:03:30.263 02:28:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:30.263 02:28:16 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:30.263 02:28:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.263 02:28:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.263 02:28:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.263 ************************************ 00:03:30.263 START TEST unittest_bdev 00:03:30.263 ************************************ 00:03:30.263 02:28:16 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:03:30.263 02:28:16 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:30.263 00:03:30.263 00:03:30.263 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.263 http://cunit.sourceforge.net/ 
00:03:30.263 00:03:30.263 00:03:30.263 Suite: bdev 00:03:30.263 Test: bytes_to_blocks_test ...passed 00:03:30.263 Test: num_blocks_test ...passed 00:03:30.263 Test: io_valid_test ...passed 00:03:30.263 Test: open_write_test ...[2024-07-25 02:28:16.929330] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:30.263 [2024-07-25 02:28:16.929680] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:30.263 [2024-07-25 02:28:16.929724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:30.263 passed 00:03:30.263 Test: claim_test ...passed 00:03:30.263 Test: alias_add_del_test ...[2024-07-25 02:28:16.934221] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:30.263 [2024-07-25 02:28:16.934253] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4663:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:30.264 [2024-07-25 02:28:16.934267] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:30.264 passed 00:03:30.264 Test: get_device_stat_test ...passed 00:03:30.264 Test: bdev_io_types_test ...passed 00:03:30.264 Test: bdev_io_wait_test ...passed 00:03:30.264 Test: bdev_io_spans_split_test ...passed 00:03:30.264 Test: bdev_io_boundary_split_test ...passed 00:03:30.264 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-25 02:28:16.941744] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3214:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:30.264 passed 00:03:30.264 Test: bdev_io_mix_split_test ...passed 00:03:30.264 Test: bdev_io_split_with_io_wait ...passed 00:03:30.264 Test: bdev_io_write_unit_split_test ...[2024-07-25 02:28:16.945666] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:30.264 [2024-07-25 02:28:16.945702] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:30.264 [2024-07-25 02:28:16.945712] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:30.264 [2024-07-25 02:28:16.945725] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2766:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:30.264 passed 00:03:30.264 Test: bdev_io_alignment_with_boundary ...passed 00:03:30.264 Test: bdev_io_alignment ...passed 00:03:30.264 Test: bdev_histograms ...passed 00:03:30.264 Test: bdev_write_zeroes ...passed 00:03:30.264 Test: bdev_compare_and_write ...passed 00:03:30.264 Test: bdev_compare ...passed 00:03:30.264 Test: bdev_compare_emulated ...passed 00:03:30.264 Test: bdev_zcopy_write ...passed 00:03:30.264 Test: bdev_zcopy_read ...passed 00:03:30.264 Test: bdev_open_while_hotremove ...passed 00:03:30.264 Test: bdev_close_while_hotremove ...passed 00:03:30.264 Test: bdev_open_ext_test ...[2024-07-25 02:28:16.957680] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:30.264 passed 00:03:30.264 Test: bdev_open_ext_unregister ...passed 00:03:30.264 Test: bdev_set_io_timeout ...[2024-07-25 02:28:16.957713] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8217:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:30.264 passed 00:03:30.264 Test: bdev_set_qd_sampling ...passed 00:03:30.264 Test: lba_range_overlap ...passed 00:03:30.264 Test: lock_lba_range_check_ranges ...passed 00:03:30.264 Test: lock_lba_range_with_io_outstanding ...passed 00:03:30.264 Test: lock_lba_range_overlapped ...passed 00:03:30.264 Test: bdev_quiesce ...[2024-07-25 02:28:16.962590] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10186:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:30.264 passed 00:03:30.264 Test: bdev_io_abort ...passed 00:03:30.264 Test: bdev_unmap ...passed 00:03:30.264 Test: bdev_write_zeroes_split_test ...passed 00:03:30.264 Test: bdev_set_options_test ...passed 00:03:30.264 Test: bdev_get_memory_domains ...passed 00:03:30.264 Test: bdev_io_ext ...[2024-07-25 02:28:16.965446] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:30.264 passed 00:03:30.264 Test: bdev_io_ext_no_opts ...passed 00:03:30.264 Test: bdev_io_ext_invalid_opts ...passed 00:03:30.264 Test: bdev_io_ext_split ...passed 00:03:30.264 Test: bdev_io_ext_bounce_buffer ...passed 00:03:30.264 Test: bdev_register_uuid_alias ...[2024-07-25 02:28:16.970163] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 8ce6e8d1-4a2d-11ef-9c8e-7947904e2597 already exists 00:03:30.264 [2024-07-25 02:28:16.970184] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:8ce6e8d1-4a2d-11ef-9c8e-7947904e2597 alias for bdev bdev0 00:03:30.264 passed 00:03:30.264 Test: bdev_unregister_by_name ...passed 00:03:30.264 Test: for_each_bdev_test ...[2024-07-25 02:28:16.970382] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8007:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:30.264 [2024-07-25 02:28:16.970389] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8016:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:30.264 passed 00:03:30.264 Test: bdev_seek_test ...passed 00:03:30.264 Test: bdev_copy ...passed 00:03:30.264 Test: bdev_copy_split_test ...passed 00:03:30.264 Test: examine_locks ...passed 00:03:30.264 Test: claim_v2_rwo ...[2024-07-25 02:28:16.973055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.264 passed 00:03:30.264 Test: claim_v2_rom ...[2024-07-25 02:28:16.973067] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973085] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973092] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973098] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973111] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8737:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:30.264 [2024-07-25 02:28:16.973131] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973137] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.264 passed 00:03:30.264 Test: claim_v2_rwm ...[2024-07-25 02:28:16.973144] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973150] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973157] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8779:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:30.264 [2024-07-25 02:28:16.973163] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8775:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:30.264 [2024-07-25 02:28:16.973178] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8810:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:30.264 [2024-07-25 02:28:16.973185] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973191] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973197] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973202] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973209] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8829:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:30.264 passed 00:03:30.264 Test: claim_v2_existing_writer ...passed 00:03:30.264 Test: claim_v2_existing_v1 ...[2024-07-25 02:28:16.973216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8810:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:30.264 [2024-07-25 02:28:16.973271] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8775:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:30.264 [2024-07-25 02:28:16.973278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8775:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:30.264 [2024-07-25 02:28:16.973291] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973297] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973303] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:30.264 passed 00:03:30.264 Test: claim_v1_existing_v2 ...passed 00:03:30.264 Test: examine_claimed ...passed 00:03:30.264 00:03:30.264 [2024-07-25 02:28:16.973316] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973323] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973330] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8578:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.264 [2024-07-25 02:28:16.973356] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:30.264 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.264 suites 1 1 n/a 0 0 00:03:30.264 tests 59 59 59 0 0 00:03:30.264 asserts 4599 4599 4599 0 n/a 00:03:30.264 00:03:30.264 Elapsed time = 0.055 seconds 00:03:30.264 02:28:16 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:30.264 00:03:30.264 00:03:30.264 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.264 http://cunit.sourceforge.net/ 00:03:30.264 00:03:30.264 00:03:30.264 Suite: nvme 00:03:30.264 Test: test_create_ctrlr ...passed 00:03:30.264 Test: test_reset_ctrlr ...[2024-07-25 02:28:16.984489] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:30.264 passed 00:03:30.264 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:30.264 Test: test_failover_ctrlr ...passed 00:03:30.264 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-25 02:28:16.985162] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.264 [2024-07-25 02:28:16.985215] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.264 [2024-07-25 02:28:16.985254] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.264 passed 00:03:30.264 Test: test_pending_reset ...[2024-07-25 02:28:16.985530] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.985601] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_attach_ctrlr ...[2024-07-25 02:28:16.985753] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:30.265 passed 00:03:30.265 Test: test_aer_cb ...passed 00:03:30.265 Test: test_submit_nvme_cmd ...passed 00:03:30.265 Test: test_add_remove_trid ...passed 00:03:30.265 Test: test_abort ...[2024-07-25 02:28:16.986257] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7480:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:30.265 passed 00:03:30.265 Test: test_get_io_qpair ...passed 00:03:30.265 Test: test_bdev_unregister ...passed 00:03:30.265 Test: test_compare_ns ...passed 00:03:30.265 Test: test_init_ana_log_page ...passed 00:03:30.265 Test: test_get_memory_domains ...passed 00:03:30.265 Test: test_reconnect_qpair ...[2024-07-25 02:28:16.986755] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_create_bdev_ctrlr ...[2024-07-25 02:28:16.986857] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5407:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:30.265 passed 00:03:30.265 Test: test_add_multi_ns_to_bdev ...[2024-07-25 02:28:16.987083] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4574:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:30.265 passed 00:03:30.265 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:30.265 Test: test_admin_path ...passed 00:03:30.265 Test: test_reset_bdev_ctrlr ...passed 00:03:30.265 Test: test_find_io_path ...passed 00:03:30.265 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:30.265 Test: test_retry_io_for_io_path_error ...passed 00:03:30.265 Test: test_retry_io_count ...passed 00:03:30.265 Test: test_concurrent_read_ana_log_page ...passed 00:03:30.265 Test: test_retry_io_for_ana_error ...passed 00:03:30.265 Test: test_check_io_error_resiliency_params ...[2024-07-25 02:28:16.988208] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:03:30.265 [2024-07-25 02:28:16.988240] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6108:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:30.265 [2024-07-25 02:28:16.988260] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6117:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:30.265 [2024-07-25 02:28:16.988278] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6120:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:30.265 [2024-07-25 02:28:16.988296] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:30.265 [2024-07-25 02:28:16.988315] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6132:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:30.265 [2024-07-25 02:28:16.988333] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6112:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:30.265 [2024-07-25 02:28:16.988350] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6127:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:30.265 passed 00:03:30.265 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-25 02:28:16.988367] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6124:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:30.265 passed 00:03:30.265 Test: test_reconnect_ctrlr ...[2024-07-25 02:28:16.988506] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988544] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988603] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988634] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988668] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_retry_failover_ctrlr ...[2024-07-25 02:28:16.988743] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_fail_path ...[2024-07-25 02:28:16.988847] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988885] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:30.265 [2024-07-25 02:28:16.988918] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988947] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:16.988977] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_nvme_ns_cmp ...passed 00:03:30.265 Test: test_ana_transition ...passed 00:03:30.265 Test: test_set_preferred_path ...passed 00:03:30.265 Test: test_find_next_io_path ...passed 00:03:30.265 Test: test_find_io_path_min_qd ...passed 00:03:30.265 Test: test_disable_auto_failback ...[2024-07-25 02:28:16.989282] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_set_multipath_policy ...passed 00:03:30.265 Test: test_uuid_generation ...passed 00:03:30.265 Test: test_retry_io_to_same_path ...passed 00:03:30.265 Test: test_race_between_reset_and_disconnected ...passed 00:03:30.265 Test: test_ctrlr_op_rpc ...passed 00:03:30.265 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:30.265 Test: test_disable_enable_ctrlr ...[2024-07-25 02:28:17.041835] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 [2024-07-25 02:28:17.041933] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.265 passed 00:03:30.265 Test: test_delete_ctrlr_done ...passed 00:03:30.265 Test: test_ns_remove_during_reset ...passed 00:03:30.265 Test: test_io_path_is_current ...passed 00:03:30.265 00:03:30.265 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.265 suites 1 1 n/a 0 0 00:03:30.265 tests 49 49 49 0 0 00:03:30.265 asserts 3578 3578 3578 0 n/a 00:03:30.265 00:03:30.265 Elapsed time = 0.023 seconds 00:03:30.265 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:30.265 00:03:30.265 00:03:30.265 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.265 http://cunit.sourceforge.net/ 00:03:30.265 00:03:30.265 Test Options 00:03:30.265 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:30.265 00:03:30.265 Suite: raid 00:03:30.265 Test: test_create_raid ...passed 00:03:30.265 Test: test_create_raid_superblock ...passed 00:03:30.265 Test: test_delete_raid ...passed 00:03:30.265 Test: test_create_raid_invalid_args ...[2024-07-25 02:28:17.057305] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1507:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:30.265 [2024-07-25 02:28:17.057771] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1501:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:30.265 [2024-07-25 02:28:17.057993] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1491:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:30.265 [2024-07-25 02:28:17.058067] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:30.265 [2024-07-25 
02:28:17.058092] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:30.265 [2024-07-25 02:28:17.058397] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3283:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:30.265 [2024-07-25 02:28:17.058431] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3461:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:30.265 passed 00:03:30.265 Test: test_delete_raid_invalid_args ...passed 00:03:30.265 Test: test_io_channel ...passed 00:03:30.265 Test: test_reset_io ...passed 00:03:30.265 Test: test_multi_raid ...passed 00:03:30.265 Test: test_io_type_supported ...passed 00:03:30.265 Test: test_raid_json_dump_info ...passed 00:03:30.265 Test: test_context_size ...passed 00:03:30.265 Test: test_raid_level_conversions ...passed 00:03:30.265 Test: test_raid_io_split ...passed 00:03:30.265 Test: test_raid_process ...passed 00:03:30.265 Test: test_raid_process_with_qos ...passed 00:03:30.265 00:03:30.265 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.265 suites 1 1 n/a 0 0 00:03:30.265 tests 15 15 15 0 0 00:03:30.265 asserts 6602 6602 6602 0 n/a 00:03:30.265 00:03:30.265 Elapsed time = 0.008 seconds 00:03:30.265 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:30.265 00:03:30.265 00:03:30.265 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.265 http://cunit.sourceforge.net/ 00:03:30.265 00:03:30.265 00:03:30.265 Suite: raid_sb 00:03:30.265 Test: test_raid_bdev_write_superblock ...passed 00:03:30.265 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:30.265 Test: test_raid_bdev_parse_superblock ...[2024-07-25 02:28:17.073340] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:30.265 passed 00:03:30.265 Suite: raid_sb_md 00:03:30.265 Test: test_raid_bdev_write_superblock ...passed 00:03:30.265 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:30.265 Test: test_raid_bdev_parse_superblock ...[2024-07-25 02:28:17.073809] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:30.266 passed 00:03:30.266 Suite: raid_sb_md_interleaved 00:03:30.266 Test: test_raid_bdev_write_superblock ...passed 00:03:30.266 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:30.266 Test: test_raid_bdev_parse_superblock ...[2024-07-25 02:28:17.073990] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:30.266 passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 3 3 n/a 0 0 00:03:30.266 tests 9 9 9 0 0 00:03:30.266 asserts 139 139 139 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.008 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: concat 00:03:30.266 Test: 
test_concat_start ...passed 00:03:30.266 Test: test_concat_rw ...passed 00:03:30.266 Test: test_concat_null_payload ...passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 1 1 n/a 0 0 00:03:30.266 tests 3 3 3 0 0 00:03:30.266 asserts 8460 8460 8460 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.000 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: raid0 00:03:30.266 Test: test_write_io ...passed 00:03:30.266 Test: test_read_io ...passed 00:03:30.266 Test: test_unmap_io ...passed 00:03:30.266 Test: test_io_failure ...passed 00:03:30.266 Suite: raid0_dif 00:03:30.266 Test: test_write_io ...passed 00:03:30.266 Test: test_read_io ...passed 00:03:30.266 Test: test_unmap_io ...passed 00:03:30.266 Test: test_io_failure ...passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 2 2 n/a 0 0 00:03:30.266 tests 8 8 8 0 0 00:03:30.266 asserts 368291 368291 368291 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.023 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: raid1 00:03:30.266 Test: test_raid1_start ...passed 00:03:30.266 Test: test_raid1_read_balancing ...passed 00:03:30.266 Test: test_raid1_write_error ...passed 00:03:30.266 Test: test_raid1_read_error ...passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 1 1 n/a 0 0 00:03:30.266 tests 4 4 4 0 0 00:03:30.266 asserts 4374 4374 4374 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.000 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: zone 00:03:30.266 Test: test_zone_get_operation ...passed 00:03:30.266 Test: test_bdev_zone_get_info ...passed 00:03:30.266 Test: test_bdev_zone_management ...passed 00:03:30.266 Test: test_bdev_zone_append ...passed 00:03:30.266 Test: test_bdev_zone_append_with_md ...passed 00:03:30.266 Test: test_bdev_zone_appendv ...passed 00:03:30.266 Test: test_bdev_zone_appendv_with_md ...passed 00:03:30.266 Test: test_bdev_io_get_append_location ...passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 1 1 n/a 0 0 00:03:30.266 tests 8 8 8 0 0 00:03:30.266 asserts 94 94 94 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.000 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: gpt_parse 00:03:30.266 Test: test_parse_mbr_and_primary ...[2024-07-25 02:28:17.130345] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 
259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:30.266 [2024-07-25 02:28:17.130709] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:30.266 [2024-07-25 02:28:17.130776] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:30.266 [2024-07-25 02:28:17.130799] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:30.266 [2024-07-25 02:28:17.130823] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:30.266 [2024-07-25 02:28:17.130845] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:30.266 passed 00:03:30.266 Test: test_parse_secondary ...[2024-07-25 02:28:17.131168] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:30.266 [2024-07-25 02:28:17.131189] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:30.266 [2024-07-25 02:28:17.131212] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:30.266 [2024-07-25 02:28:17.131232] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:30.266 passed 00:03:30.266 Test: test_check_mbr ...[2024-07-25 02:28:17.131544] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:30.266 passed 00:03:30.266 Test: test_read_header ...[2024-07-25 02:28:17.131566] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:30.266 [2024-07-25 02:28:17.131595] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:30.266 [2024-07-25 02:28:17.131618] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:30.266 [2024-07-25 02:28:17.131639] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:30.266 [2024-07-25 02:28:17.131662] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:30.266 [2024-07-25 02:28:17.131685] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:30.266 [2024-07-25 02:28:17.131704] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:30.266 passed 00:03:30.266 Test: test_read_partitions ...[2024-07-25 02:28:17.131734] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:30.266 [2024-07-25 02:28:17.131755] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:30.266 [2024-07-25 02:28:17.131775] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 
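The gpt_parse failures in this stretch come from the unit test feeding deliberately corrupted headers into the parser; the messages name the constraints it enforces: a plausible header size, a matching header CRC32, my_lba equal to the LBA the header was read from, a usable-LBA range inside the device, and a partition entry count capped at 128. Below is a minimal, self-contained sketch of those shape checks; the struct and field names are illustrative assumptions, not the SPDK definitions, and the CRC32 comparison is omitted.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative GPT header fields only -- not the SPDK struct layout. */
    struct gpt_header_fields {
        uint32_t header_size;            /* "head_size=600" and "head_size=1633771873" are rejected */
        uint64_t my_lba;                 /* must equal the LBA the header was read from */
        uint64_t usable_lba_start;
        uint64_t usable_lba_end;         /* must not pass the last LBA of the device */
        uint32_t num_partition_entries;  /* "exceeds max=128" otherwise */
        uint32_t partition_entry_size;   /* parser expects one fixed entry size */
    };

    bool gpt_header_fields_ok(const struct gpt_header_fields *h,
                              uint64_t read_lba, uint64_t disk_last_lba)
    {
        if (h->header_size < 92 || h->header_size > 512) {
            return false;   /* assumed bound: header must fit a 512-byte sector */
        }
        if (h->my_lba != read_lba) {
            return false;   /* "head my_lba(...) != expected(...)" */
        }
        if (h->usable_lba_start > h->usable_lba_end || h->usable_lba_end > disk_last_lba) {
            return false;   /* "lba range check error" */
        }
        if (h->num_partition_entries > 128) {
            return false;   /* "Num_partition_entries=... which exceeds max=128" */
        }
        if (h->partition_entry_size == 0) {
            return false;   /* "Partition_entry_size(0) != expected(...)" */
        }
        return true;
    }
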
00:03:30.266 [2024-07-25 02:28:17.131794] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:30.266 [2024-07-25 02:28:17.131952] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:30.266 passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 1 1 n/a 0 0 00:03:30.266 tests 5 5 5 0 0 00:03:30.266 asserts 33 33 33 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.008 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: bdev_part 00:03:30.266 Test: part_test ...[2024-07-25 02:28:17.145853] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4633:bdev_name_add: *ERROR*: Bdev name ccac963d-c5b3-6054-a427-fd6f5a35fd56 already exists 00:03:30.266 [2024-07-25 02:28:17.146211] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:ccac963d-c5b3-6054-a427-fd6f5a35fd56 alias for bdev test1 00:03:30.266 passed 00:03:30.266 Test: part_free_test ...passed 00:03:30.266 Test: part_get_io_channel_test ...passed 00:03:30.266 Test: part_construct_ext ...passed 00:03:30.266 00:03:30.266 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.266 suites 1 1 n/a 0 0 00:03:30.266 tests 4 4 4 0 0 00:03:30.266 asserts 48 48 48 0 n/a 00:03:30.266 00:03:30.266 Elapsed time = 0.016 seconds 00:03:30.266 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:30.266 00:03:30.266 00:03:30.266 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.266 http://cunit.sourceforge.net/ 00:03:30.266 00:03:30.266 00:03:30.266 Suite: scsi_nvme_suite 00:03:30.266 Test: scsi_nvme_translate_test ...passed 00:03:30.266 00:03:30.267 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.267 suites 1 1 n/a 0 0 00:03:30.267 tests 1 1 1 0 0 00:03:30.267 asserts 104 104 104 0 n/a 00:03:30.267 00:03:30.267 Elapsed time = 0.000 seconds 00:03:30.267 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:30.538 00:03:30.538 00:03:30.538 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.538 http://cunit.sourceforge.net/ 00:03:30.538 00:03:30.538 00:03:30.538 Suite: lvol 00:03:30.538 Test: ut_lvs_init ...[2024-07-25 02:28:17.165700] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:30.538 [2024-07-25 02:28:17.166066] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:30.538 passed 00:03:30.538 Test: ut_lvol_init ...passed 00:03:30.538 Test: ut_lvol_snapshot ...passed 00:03:30.538 Test: ut_lvol_clone ...passed 00:03:30.538 Test: ut_lvs_destroy ...passed 00:03:30.538 Test: ut_lvs_unload ...passed 00:03:30.538 Test: ut_lvol_resize ...[2024-07-25 02:28:17.166304] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:30.538 passed 00:03:30.538 Test: ut_lvol_set_read_only ...passed 00:03:30.538 Test: ut_lvol_hotremove ...passed 
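Every "Run Summary" table in this output is CUnit's standard report: the suites/tests/asserts rows count registered suites, executed test functions, and evaluated assertions for one test binary. A minimal harness producing output of this shape looks like the sketch below; the suite and test names are placeholders, not the SPDK ones.

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(2 + 2, 4);   /* every assertion feeds the "asserts" column */
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);   /* prints the per-test "...passed" lines */
        CU_basic_run_tests();                /* prints the Run Summary table */
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
    }
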
00:03:30.538 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:30.538 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:30.538 Test: ut_lvol_read_write ...passed 00:03:30.538 Test: ut_vbdev_lvol_submit_request ...passed 00:03:30.538 Test: ut_lvol_examine_config ...passed 00:03:30.538 Test: ut_lvol_examine_disk ...[2024-07-25 02:28:17.166455] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:30.538 passed 00:03:30.538 Test: ut_lvol_rename ...[2024-07-25 02:28:17.166545] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:30.538 [2024-07-25 02:28:17.166568] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:30.538 passed 00:03:30.538 Test: ut_bdev_finish ...passed 00:03:30.538 Test: ut_lvs_rename ...passed 00:03:30.538 Test: ut_lvol_seek ...passed 00:03:30.538 Test: ut_esnap_dev_create ...[2024-07-25 02:28:17.166688] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:30.538 [2024-07-25 02:28:17.166727] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:30.538 [2024-07-25 02:28:17.166792] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:30.538 passed 00:03:30.538 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-25 02:28:17.166863] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:30.538 [2024-07-25 02:28:17.166891] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:30.538 passed 00:03:30.538 Test: ut_lvol_shallow_copy ...[2024-07-25 02:28:17.166948] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:30.538 [2024-07-25 02:28:17.166969] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:30.538 passed 00:03:30.538 Test: ut_lvol_set_external_parent ...passed[2024-07-25 02:28:17.167005] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:30.538 00:03:30.538 00:03:30.538 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.538 suites 1 1 n/a 0 0 00:03:30.538 tests 23 23 23 0 0 00:03:30.538 asserts 770 770 770 0 n/a 00:03:30.538 00:03:30.538 Elapsed time = 0.008 seconds 00:03:30.538 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:30.538 00:03:30.538 00:03:30.538 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.538 http://cunit.sourceforge.net/ 00:03:30.538 00:03:30.538 00:03:30.538 Suite: zone_block 00:03:30.538 Test: test_zone_block_create ...passed 00:03:30.538 Test: test_zone_block_create_invalid ...[2024-07-25 02:28:17.182838] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 
624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:30.538 [2024-07-25 02:28:17.183077] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-25 02:28:17.183129] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:30.538 [2024-07-25 02:28:17.183144] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File existspassed 00:03:30.538 Test: test_get_zone_info ...[2024-07-25 02:28:17.183160] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:30.538 [2024-07-25 02:28:17.183172] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-25 02:28:17.183185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:30.538 [2024-07-25 02:28:17.183196] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-25 02:28:17.183272] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.183293] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_supported_io_types ...passed 00:03:30.539 Test: test_reset_zone ...[2024-07-25 02:28:17.183307] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.183373] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.183388] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_open_zone ...[2024-07-25 02:28:17.183461] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.183723] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.183748] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_zone_write ...[2024-07-25 02:28:17.183794] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:30.539 [2024-07-25 02:28:17.183806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
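test_zone_block_create_invalid above drives the zone_block create path with bad arguments and expects exactly the errors shown: a base bdev that is already claimed (or already zoned), a zone capacity of 0, and an optimal-open-zones count of 0. A hedged sketch of that argument validation follows; the function and parameter names are illustrative, not the module's own.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative parameter check mirroring the create-time errors above. */
    bool zone_block_params_ok(uint64_t zone_capacity, uint64_t optimal_open_zones,
                              bool base_bdev_already_claimed)
    {
        if (base_bdev_already_claimed) {
            fprintf(stderr, "base bdev already claimed\n");
            return false;
        }
        if (zone_capacity == 0) {
            fprintf(stderr, "Zone capacity can't be 0\n");
            return false;
        }
        if (optimal_open_zones == 0) {
            fprintf(stderr, "Optimal open zones can't be 0\n");
            return false;
        }
        return true;
    }
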
00:03:30.539 [2024-07-25 02:28:17.183822] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:30.539 [2024-07-25 02:28:17.183833] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.184513] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:30.539 [2024-07-25 02:28:17.184543] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.184559] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:30.539 [2024-07-25 02:28:17.184570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185297] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:30.539 [2024-07-25 02:28:17.185323] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_zone_read ...[2024-07-25 02:28:17.185381] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:30.539 [2024-07-25 02:28:17.185395] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185411] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:30.539 [2024-07-25 02:28:17.185427] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185494] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:30.539 [2024-07-25 02:28:17.185509] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_close_zone ...[2024-07-25 02:28:17.185547] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185566] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185610] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_finish_zone ...[2024-07-25 02:28:17.185624] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
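The zone_block_write errors in this part of the run encode the two invariants enforced on writes to a zoned vbdev: the write must start exactly at the zone's current write pointer (so "lba 0x407, wp 0x405" is rejected) and it must not run past the zone capacity ("Write exceeds zone capacity"). A minimal, self-contained version of that check, with assumed field names rather than SPDK's:

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative zone state -- field names are assumptions, not SPDK's. */
    struct zone_state {
        uint64_t zone_start_lba;   /* first LBA of the zone */
        uint64_t capacity;         /* writable blocks in the zone */
        uint64_t write_pointer;    /* next LBA that may be written */
    };

    bool zoned_write_ok(const struct zone_state *z, uint64_t lba, uint64_t num_blocks)
    {
        if (lba != z->write_pointer) {
            /* "Trying to write to zone with invalid address (lba ..., wp ...)" */
            return false;
        }
        if (lba + num_blocks > z->zone_start_lba + z->capacity) {
            /* "Write exceeds zone capacity (lba ..., len ..., wp ...)" */
            return false;
        }
        return true;
    }
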
00:03:30.539 [2024-07-25 02:28:17.185693] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185718] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 Test: test_append_zone ...[2024-07-25 02:28:17.185763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:30.539 [2024-07-25 02:28:17.185784] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 [2024-07-25 02:28:17.185801] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:30.539 [2024-07-25 02:28:17.185813] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:30.539 passed 00:03:30.539 00:03:30.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.539 suites 1 1 n/a 0 0 00:03:30.539 tests 11 11 11 0 0 00:03:30.539 asserts 3437 3437 3437 0 n/a 00:03:30.539 00:03:30.539 Elapsed time = 0.008 seconds 00:03:30.539 [2024-07-25 02:28:17.187069] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:30.539 [2024-07-25 02:28:17.187089] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:30.539 02:28:17 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:30.539 00:03:30.539 00:03:30.539 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.539 http://cunit.sourceforge.net/ 00:03:30.539 00:03:30.539 00:03:30.539 Suite: bdev 00:03:30.539 Test: basic ...[2024-07-25 02:28:17.199928] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b639): Operation not permitted (rc=-1) 00:03:30.539 [2024-07-25 02:28:17.200251] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x25c17806a480 (0x24b630): Operation not permitted (rc=-1) 00:03:30.539 [2024-07-25 02:28:17.200277] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b639): Operation not permitted (rc=-1) 00:03:30.539 passed 00:03:30.539 Test: unregister_and_close ...passed 00:03:30.539 Test: unregister_and_close_different_threads ...passed 00:03:30.539 Test: basic_qos ...passed 00:03:30.539 Test: put_channel_during_reset ...passed 00:03:30.539 Test: aborted_reset ...passed 00:03:30.539 Test: aborted_reset_no_outstanding_io ...passed 00:03:30.539 Test: io_during_reset ...passed 00:03:30.539 Test: reset_completions ...passed 00:03:30.539 Test: io_during_qos_queue ...passed 00:03:30.539 Test: io_during_qos_reset ...passed 00:03:30.539 Test: enomem ...passed 00:03:30.539 Test: enomem_multi_bdev ...passed 00:03:30.539 Test: enomem_multi_bdev_unregister ...passed 00:03:30.539 Test: enomem_multi_io_target ...passed 00:03:30.539 Test: qos_dynamic_enable ...passed 00:03:30.539 Test: bdev_histograms_mt ...passed 00:03:30.539 Test: bdev_set_io_timeout_mt ...passed 00:03:30.539 Test: lock_lba_range_then_submit_io ...[2024-07-25 02:28:17.233121] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x25c17806a600 not unregistered 00:03:30.539 [2024-07-25 02:28:17.233858] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b618 already registered (old:0x25c17806a600 new:0x25c17806a780) 00:03:30.539 passed 00:03:30.539 Test: unregister_during_reset ...passed 00:03:30.539 Test: event_notify_and_close ...passed 00:03:30.539 Test: unregister_and_qos_poller ...passed 00:03:30.539 Suite: bdev_wrong_thread 00:03:30.539 Test: spdk_bdev_register_wt ...[2024-07-25 02:28:17.238067] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8536:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x25c178033380 (0x25c178033380) 00:03:30.539 passed 00:03:30.539 Test: spdk_bdev_examine_wt ...passed[2024-07-25 02:28:17.238106] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x25c178033380 (0x25c178033380) 00:03:30.539 00:03:30.539 00:03:30.539 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.539 suites 2 2 n/a 0 0 00:03:30.539 tests 24 24 24 0 0 00:03:30.539 asserts 621 621 621 0 n/a 00:03:30.539 00:03:30.539 Elapsed time = 0.039 seconds 00:03:30.539 00:03:30.539 real 0m0.325s 00:03:30.539 user 0m0.209s 00:03:30.539 sys 0m0.076s 00:03:30.539 02:28:17 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.539 02:28:17 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:03:30.539 ************************************ 00:03:30.539 END TEST unittest_bdev 00:03:30.539 ************************************ 00:03:30.539 02:28:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:30.539 
02:28:17 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:30.539 02:28:17 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:30.539 02:28:17 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:30.539 02:28:17 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:30.539 02:28:17 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:30.539 02:28:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.539 02:28:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.539 02:28:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.539 ************************************ 00:03:30.539 START TEST unittest_blob_blobfs 00:03:30.539 ************************************ 00:03:30.539 02:28:17 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:03:30.539 02:28:17 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:30.539 02:28:17 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:30.540 00:03:30.540 00:03:30.540 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.540 http://cunit.sourceforge.net/ 00:03:30.540 00:03:30.540 00:03:30.540 Suite: blob_nocopy_noextent 00:03:30.540 Test: blob_init ...[2024-07-25 02:28:17.310884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:30.540 passed 00:03:30.540 Test: blob_thin_provision ...passed 00:03:30.540 Test: blob_read_only ...passed 00:03:30.540 Test: bs_load ...passed 00:03:30.540 Test: bs_load_custom_cluster_size ...[2024-07-25 02:28:17.375777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:30.540 passed 00:03:30.540 Test: bs_load_after_failed_grow ...passed 00:03:30.540 Test: bs_cluster_sz ...[2024-07-25 02:28:17.394659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:30.540 [2024-07-25 02:28:17.394704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
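The bs_cluster_sz case here (its second message follows just below) exercises the blobstore option checks: a cluster size of 0 is rejected outright, and a cluster size smaller than the 4096-byte metadata page is rejected as well. A hedged sketch of that validation, not the blobstore's actual code:

    #include <stdint.h>
    #include <stdbool.h>

    #define MD_PAGE_SIZE 4096u   /* assumption: matches "page size 4096" in the messages */

    bool bs_cluster_sz_ok(uint32_t cluster_sz)
    {
        if (cluster_sz == 0) {
            return false;   /* "Blobstore options cannot be set to 0" */
        }
        if (cluster_sz < MD_PAGE_SIZE) {
            return false;   /* "Cluster size 4095 is smaller than page size 4096" */
        }
        return true;
    }
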
00:03:30.540 [2024-07-25 02:28:17.394713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:30.540 passed 00:03:30.540 Test: bs_resize_md ...passed 00:03:30.540 Test: bs_destroy ...passed 00:03:30.817 Test: bs_type ...passed 00:03:30.817 Test: bs_super_block ...passed 00:03:30.817 Test: bs_test_recover_cluster_count ...passed 00:03:30.817 Test: bs_grow_live ...passed 00:03:30.817 Test: bs_grow_live_no_space ...passed 00:03:30.817 Test: bs_test_grow ...passed 00:03:30.817 Test: blob_serialize_test ...passed 00:03:30.817 Test: super_block_crc ...passed 00:03:30.817 Test: blob_thin_prov_write_count_io ...passed 00:03:30.817 Test: blob_thin_prov_unmap_cluster ...passed 00:03:30.817 Test: bs_load_iter_test ...passed 00:03:30.817 Test: blob_relations ...[2024-07-25 02:28:17.535512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.817 [2024-07-25 02:28:17.535571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.817 [2024-07-25 02:28:17.535630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.817 [2024-07-25 02:28:17.535641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.817 passed 00:03:30.817 Test: blob_relations2 ...[2024-07-25 02:28:17.545853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.817 [2024-07-25 02:28:17.545878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.817 [2024-07-25 02:28:17.545884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.817 [2024-07-25 02:28:17.545889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.817 [2024-07-25 02:28:17.545982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.817 [2024-07-25 02:28:17.545988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.817 [2024-07-25 02:28:17.546015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:30.817 [2024-07-25 02:28:17.546020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.817 passed 00:03:30.817 Test: blob_relations3 ...passed 00:03:30.817 Test: blobstore_clean_power_failure ...passed 00:03:30.817 Test: blob_delete_snapshot_power_failure ...[2024-07-25 02:28:17.676786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:30.817 [2024-07-25 02:28:17.686415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:30.817 [2024-07-25 02:28:17.686483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:30.817 [2024-07-25 02:28:17.686490] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.818 [2024-07-25 02:28:17.696028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:30.818 [2024-07-25 02:28:17.696058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:30.818 [2024-07-25 02:28:17.696065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:30.818 [2024-07-25 02:28:17.696071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.818 [2024-07-25 02:28:17.705701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:30.818 [2024-07-25 02:28:17.705724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.078 [2024-07-25 02:28:17.715317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:31.078 [2024-07-25 02:28:17.715356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.078 [2024-07-25 02:28:17.724850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:31.078 [2024-07-25 02:28:17.724888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.078 passed 00:03:31.078 Test: blob_create_snapshot_power_failure ...[2024-07-25 02:28:17.753055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:31.078 [2024-07-25 02:28:17.771929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:31.078 [2024-07-25 02:28:17.781475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:31.078 passed 00:03:31.078 Test: blob_io_unit ...passed 00:03:31.078 Test: blob_io_unit_compatibility ...passed 00:03:31.078 Test: blob_ext_md_pages ...passed 00:03:31.078 Test: blob_esnap_io_4096_4096 ...passed 00:03:31.078 Test: blob_esnap_io_512_512 ...passed 00:03:31.078 Test: blob_esnap_io_4096_512 ...passed 00:03:31.078 Test: blob_esnap_io_512_4096 ...passed 00:03:31.078 Test: blob_esnap_clone_resize ...passed 00:03:31.078 Suite: blob_bs_nocopy_noextent 00:03:31.078 Test: blob_open ...passed 00:03:31.338 Test: blob_create ...[2024-07-25 02:28:17.978646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:31.338 passed 00:03:31.338 Test: blob_create_loop ...passed 00:03:31.338 Test: blob_create_fail ...[2024-07-25 02:28:18.045423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:31.338 passed 00:03:31.338 Test: blob_create_internal ...passed 00:03:31.338 Test: blob_create_zero_extent ...passed 00:03:31.338 Test: blob_snapshot ...passed 00:03:31.338 Test: blob_clone ...passed 00:03:31.338 Test: blob_inflate 
...[2024-07-25 02:28:18.191006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:31.338 passed 00:03:31.338 Test: blob_delete ...passed 00:03:31.598 Test: blob_resize_test ...[2024-07-25 02:28:18.246743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:31.598 passed 00:03:31.598 Test: blob_resize_thin_test ...passed 00:03:31.598 Test: channel_ops ...passed 00:03:31.598 Test: blob_super ...passed 00:03:31.598 Test: blob_rw_verify_iov ...passed 00:03:31.598 Test: blob_unmap ...passed 00:03:31.598 Test: blob_iter ...passed 00:03:31.598 Test: blob_parse_md ...passed 00:03:31.598 Test: bs_load_pending_removal ...passed 00:03:31.858 Test: bs_unload ...[2024-07-25 02:28:18.497518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:31.858 passed 00:03:31.858 Test: bs_usable_clusters ...passed 00:03:31.858 Test: blob_crc ...[2024-07-25 02:28:18.552766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:31.858 [2024-07-25 02:28:18.552814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:31.858 passed 00:03:31.858 Test: blob_flags ...passed 00:03:31.858 Test: bs_version ...passed 00:03:31.858 Test: blob_set_xattrs_test ...[2024-07-25 02:28:18.636192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:31.858 [2024-07-25 02:28:18.636238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:31.858 passed 00:03:31.858 Test: blob_thin_prov_alloc ...passed 00:03:31.858 Test: blob_insert_cluster_msg_test ...passed 00:03:31.858 Test: blob_thin_prov_rw ...passed 00:03:32.118 Test: blob_thin_prov_rle ...passed 00:03:32.118 Test: blob_thin_prov_rw_iov ...passed 00:03:32.118 Test: blob_snapshot_rw ...passed 00:03:32.118 Test: blob_snapshot_rw_iov ...passed 00:03:32.118 Test: blob_inflate_rw ...passed 00:03:32.118 Test: blob_snapshot_freeze_io ...passed 00:03:32.118 Test: blob_operation_split_rw ...passed 00:03:32.378 Test: blob_operation_split_rw_iov ...passed 00:03:32.379 Test: blob_simultaneous_operations ...[2024-07-25 02:28:19.063380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.379 [2024-07-25 02:28:19.063459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.379 [2024-07-25 02:28:19.063694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.379 [2024-07-25 02:28:19.063706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.379 [2024-07-25 02:28:19.066776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.379 [2024-07-25 02:28:19.066803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.379 [2024-07-25 02:28:19.066823] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.379 [2024-07-25 02:28:19.066829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.379 passed 00:03:32.379 Test: blob_persist_test ...passed 00:03:32.379 Test: blob_decouple_snapshot ...passed 00:03:32.379 Test: blob_seek_io_unit ...passed 00:03:32.379 Test: blob_nested_freezes ...passed 00:03:32.379 Test: blob_clone_resize ...passed 00:03:32.379 Test: blob_shallow_copy ...[2024-07-25 02:28:19.258491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:32.379 [2024-07-25 02:28:19.258547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:32.379 [2024-07-25 02:28:19.258554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:32.379 passed 00:03:32.379 Suite: blob_blob_nocopy_noextent 00:03:32.638 Test: blob_write ...passed 00:03:32.638 Test: blob_read ...passed 00:03:32.638 Test: blob_rw_verify ...passed 00:03:32.638 Test: blob_rw_verify_iov_nomem ...passed 00:03:32.638 Test: blob_rw_iov_read_only ...passed 00:03:32.638 Test: blob_xattr ...passed 00:03:32.638 Test: blob_dirty_shutdown ...passed 00:03:32.638 Test: blob_is_degraded ...passed 00:03:32.638 Suite: blob_esnap_bs_nocopy_noextent 00:03:32.638 Test: blob_esnap_create ...passed 00:03:32.897 Test: blob_esnap_thread_add_remove ...passed 00:03:32.897 Test: blob_esnap_clone_snapshot ...passed 00:03:32.897 Test: blob_esnap_clone_inflate ...passed 00:03:32.897 Test: blob_esnap_clone_decouple ...passed 00:03:32.897 Test: blob_esnap_clone_reload ...passed 00:03:32.897 Test: blob_esnap_hotplug ...passed 00:03:32.897 Test: blob_set_parent ...[2024-07-25 02:28:19.713462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:32.897 [2024-07-25 02:28:19.713537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:32.897 [2024-07-25 02:28:19.713552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:32.897 [2024-07-25 02:28:19.713560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:32.897 [2024-07-25 02:28:19.713602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:32.897 passed 00:03:32.897 Test: blob_set_external_parent ...[2024-07-25 02:28:19.741694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:32.897 [2024-07-25 02:28:19.741739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:32.897 [2024-07-25 02:28:19.741762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:03:32.897 [2024-07-25 02:28:19.741793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:32.897 passed 00:03:32.898 Suite: blob_nocopy_extent 00:03:32.898 Test: blob_init ...[2024-07-25 02:28:19.751295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:32.898 passed 00:03:32.898 Test: blob_thin_provision ...passed 00:03:32.898 Test: blob_read_only ...passed 00:03:32.898 Test: bs_load ...passed 00:03:32.898 Test: bs_load_custom_cluster_size ...[2024-07-25 02:28:19.788644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:33.155 passed 00:03:33.155 Test: bs_load_after_failed_grow ...passed 00:03:33.155 Test: bs_cluster_sz ...[2024-07-25 02:28:19.807603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:33.155 [2024-07-25 02:28:19.807668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:33.155 [2024-07-25 02:28:19.807677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:33.155 passed 00:03:33.155 Test: bs_resize_md ...passed 00:03:33.155 Test: bs_destroy ...passed 00:03:33.155 Test: bs_type ...passed 00:03:33.155 Test: bs_super_block ...passed 00:03:33.155 Test: bs_test_recover_cluster_count ...passed 00:03:33.155 Test: bs_grow_live ...passed 00:03:33.155 Test: bs_grow_live_no_space ...passed 00:03:33.155 Test: bs_test_grow ...passed 00:03:33.155 Test: blob_serialize_test ...passed 00:03:33.155 Test: super_block_crc ...passed 00:03:33.155 Test: blob_thin_prov_write_count_io ...passed 00:03:33.155 Test: blob_thin_prov_unmap_cluster ...passed 00:03:33.155 Test: bs_load_iter_test ...passed 00:03:33.155 Test: blob_relations ...[2024-07-25 02:28:19.946971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.155 [2024-07-25 02:28:19.947046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.155 [2024-07-25 02:28:19.947120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.155 [2024-07-25 02:28:19.947127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.155 passed 00:03:33.155 Test: blob_relations2 ...[2024-07-25 02:28:19.957280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.155 [2024-07-25 02:28:19.957319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.155 [2024-07-25 02:28:19.957325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.155 [2024-07-25 02:28:19.957347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.155 [2024-07-25 
02:28:19.957433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.155 [2024-07-25 02:28:19.957439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.155 [2024-07-25 02:28:19.957467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.155 [2024-07-25 02:28:19.957472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.155 passed 00:03:33.155 Test: blob_relations3 ...passed 00:03:33.413 Test: blobstore_clean_power_failure ...passed 00:03:33.413 Test: blob_delete_snapshot_power_failure ...[2024-07-25 02:28:20.087901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:33.413 [2024-07-25 02:28:20.097299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:33.413 [2024-07-25 02:28:20.106724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:33.413 [2024-07-25 02:28:20.106768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.413 [2024-07-25 02:28:20.106791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.413 [2024-07-25 02:28:20.116209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:33.413 [2024-07-25 02:28:20.116248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:33.413 [2024-07-25 02:28:20.116254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.413 [2024-07-25 02:28:20.116276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.413 [2024-07-25 02:28:20.125881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:33.413 [2024-07-25 02:28:20.125907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:33.413 [2024-07-25 02:28:20.125914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.413 [2024-07-25 02:28:20.125920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.413 [2024-07-25 02:28:20.135344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:33.413 [2024-07-25 02:28:20.135376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.413 [2024-07-25 02:28:20.144785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:33.413 [2024-07-25 02:28:20.144824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.413 [2024-07-25 02:28:20.154210] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:33.413 [2024-07-25 02:28:20.154248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.413 passed 00:03:33.413 Test: blob_create_snapshot_power_failure ...[2024-07-25 02:28:20.182143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:33.413 [2024-07-25 02:28:20.191516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:33.413 [2024-07-25 02:28:20.210165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:33.413 [2024-07-25 02:28:20.219512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:33.413 passed 00:03:33.413 Test: blob_io_unit ...passed 00:03:33.413 Test: blob_io_unit_compatibility ...passed 00:03:33.413 Test: blob_ext_md_pages ...passed 00:03:33.413 Test: blob_esnap_io_4096_4096 ...passed 00:03:33.671 Test: blob_esnap_io_512_512 ...passed 00:03:33.671 Test: blob_esnap_io_4096_512 ...passed 00:03:33.671 Test: blob_esnap_io_512_4096 ...passed 00:03:33.671 Test: blob_esnap_clone_resize ...passed 00:03:33.671 Suite: blob_bs_nocopy_extent 00:03:33.671 Test: blob_open ...passed 00:03:33.671 Test: blob_create ...[2024-07-25 02:28:20.416654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:33.671 passed 00:03:33.671 Test: blob_create_loop ...passed 00:03:33.671 Test: blob_create_fail ...[2024-07-25 02:28:20.483329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:33.671 passed 00:03:33.671 Test: blob_create_internal ...passed 00:03:33.671 Test: blob_create_zero_extent ...passed 00:03:33.929 Test: blob_snapshot ...passed 00:03:33.929 Test: blob_clone ...passed 00:03:33.929 Test: blob_inflate ...[2024-07-25 02:28:20.627470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
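The *_power_failure cases in these blob suites re-run an operation against a backing dev that is made to start failing I/O partway through (the exact injection mechanism is the test's own), then reload the blobstore and verify the on-disk metadata is still consistent; the "-5" codes above are those injected -EIO-style failures surfacing. A generic sketch of such a fault-injecting write hook, with names that are illustrative rather than the unit test's actual helpers:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative fault-injecting device: writes fail once the threshold is reached. */
    struct faulty_dev {
        uint8_t  *backing;       /* in-memory backing store */
        uint64_t  writes_done;
        uint64_t  fail_after;    /* 0 = never fail */
    };

    int faulty_dev_write(struct faulty_dev *dev, uint64_t offset,
                         const void *buf, uint64_t len)
    {
        if (dev->fail_after != 0 && dev->writes_done >= dev->fail_after) {
            return -5;           /* -EIO, matching the "-5" results in the log */
        }
        memcpy(dev->backing + offset, buf, len);
        dev->writes_done++;
        return 0;
    }
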
00:03:33.929 passed 00:03:33.929 Test: blob_delete ...passed 00:03:33.929 Test: blob_resize_test ...[2024-07-25 02:28:20.683060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:33.929 passed 00:03:33.929 Test: blob_resize_thin_test ...passed 00:03:33.929 Test: channel_ops ...passed 00:03:33.929 Test: blob_super ...passed 00:03:33.929 Test: blob_rw_verify_iov ...passed 00:03:34.188 Test: blob_unmap ...passed 00:03:34.188 Test: blob_iter ...passed 00:03:34.188 Test: blob_parse_md ...passed 00:03:34.188 Test: bs_load_pending_removal ...passed 00:03:34.188 Test: bs_unload ...[2024-07-25 02:28:20.933539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:34.188 passed 00:03:34.188 Test: bs_usable_clusters ...passed 00:03:34.188 Test: blob_crc ...[2024-07-25 02:28:20.988619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:34.188 [2024-07-25 02:28:20.988664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:34.188 passed 00:03:34.188 Test: blob_flags ...passed 00:03:34.188 Test: bs_version ...passed 00:03:34.188 Test: blob_set_xattrs_test ...[2024-07-25 02:28:21.071727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.188 [2024-07-25 02:28:21.071782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.188 passed 00:03:34.446 Test: blob_thin_prov_alloc ...passed 00:03:34.446 Test: blob_insert_cluster_msg_test ...passed 00:03:34.446 Test: blob_thin_prov_rw ...passed 00:03:34.446 Test: blob_thin_prov_rle ...passed 00:03:34.446 Test: blob_thin_prov_rw_iov ...passed 00:03:34.446 Test: blob_snapshot_rw ...passed 00:03:34.446 Test: blob_snapshot_rw_iov ...passed 00:03:34.704 Test: blob_inflate_rw ...passed 00:03:34.704 Test: blob_snapshot_freeze_io ...passed 00:03:34.704 Test: blob_operation_split_rw ...passed 00:03:34.704 Test: blob_operation_split_rw_iov ...passed 00:03:34.704 Test: blob_simultaneous_operations ...[2024-07-25 02:28:21.483502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.704 [2024-07-25 02:28:21.483571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.704 [2024-07-25 02:28:21.483810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.704 [2024-07-25 02:28:21.483823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.704 [2024-07-25 02:28:21.486768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.704 [2024-07-25 02:28:21.486788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.704 [2024-07-25 02:28:21.486801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.704 [2024-07-25 02:28:21.486806] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.704 passed 00:03:34.704 Test: blob_persist_test ...passed 00:03:34.704 Test: blob_decouple_snapshot ...passed 00:03:34.704 Test: blob_seek_io_unit ...passed 00:03:34.964 Test: blob_nested_freezes ...passed 00:03:34.964 Test: blob_clone_resize ...passed 00:03:34.964 Test: blob_shallow_copy ...[2024-07-25 02:28:21.675462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:34.964 [2024-07-25 02:28:21.675516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:34.964 [2024-07-25 02:28:21.675523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:34.964 passed 00:03:34.964 Suite: blob_blob_nocopy_extent 00:03:34.964 Test: blob_write ...passed 00:03:34.964 Test: blob_read ...passed 00:03:34.964 Test: blob_rw_verify ...passed 00:03:34.964 Test: blob_rw_verify_iov_nomem ...passed 00:03:34.964 Test: blob_rw_iov_read_only ...passed 00:03:34.964 Test: blob_xattr ...passed 00:03:35.224 Test: blob_dirty_shutdown ...passed 00:03:35.224 Test: blob_is_degraded ...passed 00:03:35.224 Suite: blob_esnap_bs_nocopy_extent 00:03:35.224 Test: blob_esnap_create ...passed 00:03:35.224 Test: blob_esnap_thread_add_remove ...passed 00:03:35.224 Test: blob_esnap_clone_snapshot ...passed 00:03:35.224 Test: blob_esnap_clone_inflate ...passed 00:03:35.224 Test: blob_esnap_clone_decouple ...passed 00:03:35.224 Test: blob_esnap_clone_reload ...passed 00:03:35.224 Test: blob_esnap_hotplug ...passed 00:03:35.484 Test: blob_set_parent ...[2024-07-25 02:28:22.128338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:35.484 [2024-07-25 02:28:22.128392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:35.484 [2024-07-25 02:28:22.128409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:35.484 [2024-07-25 02:28:22.128416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:35.484 [2024-07-25 02:28:22.128462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:35.484 passed 00:03:35.484 Test: blob_set_external_parent ...[2024-07-25 02:28:22.156514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:35.484 [2024-07-25 02:28:22.156549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:35.484 [2024-07-25 02:28:22.156555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:35.484 [2024-07-25 02:28:22.156585] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:35.484 passed 00:03:35.484 Suite: blob_copy_noextent 00:03:35.484 Test: blob_init ...[2024-07-25 02:28:22.165998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:35.484 passed 00:03:35.484 Test: blob_thin_provision ...passed 00:03:35.484 Test: blob_read_only ...passed 00:03:35.484 Test: bs_load ...passed 00:03:35.484 Test: bs_load_custom_cluster_size ...[2024-07-25 02:28:22.202950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:35.484 passed 00:03:35.484 Test: bs_load_after_failed_grow ...passed 00:03:35.484 Test: bs_cluster_sz ...[2024-07-25 02:28:22.221549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:35.484 [2024-07-25 02:28:22.221589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:35.484 [2024-07-25 02:28:22.221598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:35.484 passed 00:03:35.484 Test: bs_resize_md ...passed 00:03:35.484 Test: bs_destroy ...passed 00:03:35.484 Test: bs_type ...passed 00:03:35.484 Test: bs_super_block ...passed 00:03:35.484 Test: bs_test_recover_cluster_count ...passed 00:03:35.484 Test: bs_grow_live ...passed 00:03:35.484 Test: bs_grow_live_no_space ...passed 00:03:35.484 Test: bs_test_grow ...passed 00:03:35.484 Test: blob_serialize_test ...passed 00:03:35.484 Test: super_block_crc ...passed 00:03:35.484 Test: blob_thin_prov_write_count_io ...passed 00:03:35.484 Test: blob_thin_prov_unmap_cluster ...passed 00:03:35.484 Test: bs_load_iter_test ...passed 00:03:35.484 Test: blob_relations ...[2024-07-25 02:28:22.356899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.484 [2024-07-25 02:28:22.356945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.484 [2024-07-25 02:28:22.357003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.484 [2024-07-25 02:28:22.357009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.484 passed 00:03:35.484 Test: blob_relations2 ...[2024-07-25 02:28:22.366909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.484 [2024-07-25 02:28:22.366930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.484 [2024-07-25 02:28:22.366947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.484 [2024-07-25 02:28:22.366952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.485 [2024-07-25 02:28:22.367038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:03:35.485 [2024-07-25 02:28:22.367044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.485 [2024-07-25 02:28:22.367067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.485 [2024-07-25 02:28:22.367072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.485 passed 00:03:35.485 Test: blob_relations3 ...passed 00:03:35.743 Test: blobstore_clean_power_failure ...passed 00:03:35.743 Test: blob_delete_snapshot_power_failure ...[2024-07-25 02:28:22.497244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:35.743 [2024-07-25 02:28:22.506625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:35.743 [2024-07-25 02:28:22.506653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:35.743 [2024-07-25 02:28:22.506675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.743 [2024-07-25 02:28:22.516062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:35.743 [2024-07-25 02:28:22.516082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:35.743 [2024-07-25 02:28:22.516087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:35.743 [2024-07-25 02:28:22.516092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.743 [2024-07-25 02:28:22.525435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:35.743 [2024-07-25 02:28:22.525456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.743 [2024-07-25 02:28:22.534890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:35.743 [2024-07-25 02:28:22.534918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.743 [2024-07-25 02:28:22.544349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:35.743 [2024-07-25 02:28:22.544376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.743 passed 00:03:35.743 Test: blob_create_snapshot_power_failure ...[2024-07-25 02:28:22.572620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:35.743 [2024-07-25 02:28:22.591439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:35.743 [2024-07-25 02:28:22.600852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:35.743 passed 
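The blobstore failures logged above are negative-path checks: spdk_bs_init rejects a dev block length of 500, options set to 0, and a cluster size (4095) smaller than the 4096-byte page size. For orientation only, a minimal sketch of initializing a blobstore with options that satisfy those constraints, assuming the public spdk_bs_* API as declared in the SPDK headers (field names and callback signature are assumptions, not taken from this test run):

#include "spdk/blob.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* bserrno is 0 on success; the unit tests above feed invalid opts on purpose. */
}

static void
init_blobstore(struct spdk_bs_dev *bs_dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 4 * 1024 * 1024;	/* a multiple of the 4096-byte page size, unlike the 4095 case above */

	spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
}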
00:03:35.743 Test: blob_io_unit ...passed 00:03:36.000 Test: blob_io_unit_compatibility ...passed 00:03:36.000 Test: blob_ext_md_pages ...passed 00:03:36.000 Test: blob_esnap_io_4096_4096 ...passed 00:03:36.000 Test: blob_esnap_io_512_512 ...passed 00:03:36.000 Test: blob_esnap_io_4096_512 ...passed 00:03:36.000 Test: blob_esnap_io_512_4096 ...passed 00:03:36.000 Test: blob_esnap_clone_resize ...passed 00:03:36.000 Suite: blob_bs_copy_noextent 00:03:36.000 Test: blob_open ...passed 00:03:36.000 Test: blob_create ...[2024-07-25 02:28:22.797075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:36.000 passed 00:03:36.000 Test: blob_create_loop ...passed 00:03:36.000 Test: blob_create_fail ...[2024-07-25 02:28:22.863328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:36.000 passed 00:03:36.260 Test: blob_create_internal ...passed 00:03:36.260 Test: blob_create_zero_extent ...passed 00:03:36.260 Test: blob_snapshot ...passed 00:03:36.260 Test: blob_clone ...passed 00:03:36.260 Test: blob_inflate ...[2024-07-25 02:28:23.005003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:36.260 passed 00:03:36.260 Test: blob_delete ...passed 00:03:36.260 Test: blob_resize_test ...[2024-07-25 02:28:23.058528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:36.260 passed 00:03:36.260 Test: blob_resize_thin_test ...passed 00:03:36.260 Test: channel_ops ...passed 00:03:36.260 Test: blob_super ...passed 00:03:36.519 Test: blob_rw_verify_iov ...passed 00:03:36.519 Test: blob_unmap ...passed 00:03:36.519 Test: blob_iter ...passed 00:03:36.519 Test: blob_parse_md ...passed 00:03:36.519 Test: bs_load_pending_removal ...passed 00:03:36.519 Test: bs_unload ...[2024-07-25 02:28:23.308662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:36.519 passed 00:03:36.519 Test: bs_usable_clusters ...passed 00:03:36.519 Test: blob_crc ...[2024-07-25 02:28:23.364656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:36.519 [2024-07-25 02:28:23.364702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:36.519 passed 00:03:36.519 Test: blob_flags ...passed 00:03:36.778 Test: bs_version ...passed 00:03:36.778 Test: blob_set_xattrs_test ...[2024-07-25 02:28:23.447950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:36.778 [2024-07-25 02:28:23.448013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:36.778 passed 00:03:36.778 Test: blob_thin_prov_alloc ...passed 00:03:36.778 Test: blob_insert_cluster_msg_test ...passed 00:03:36.778 Test: blob_thin_prov_rw ...passed 00:03:36.778 Test: blob_thin_prov_rle ...passed 00:03:36.778 Test: blob_thin_prov_rw_iov ...passed 00:03:36.778 Test: blob_snapshot_rw ...passed 00:03:36.778 Test: blob_snapshot_rw_iov ...passed 00:03:37.037 Test: 
blob_inflate_rw ...passed 00:03:37.037 Test: blob_snapshot_freeze_io ...passed 00:03:37.037 Test: blob_operation_split_rw ...passed 00:03:37.037 Test: blob_operation_split_rw_iov ...passed 00:03:37.037 Test: blob_simultaneous_operations ...[2024-07-25 02:28:23.857183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.037 [2024-07-25 02:28:23.857255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.037 [2024-07-25 02:28:23.857484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.037 [2024-07-25 02:28:23.857497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.037 [2024-07-25 02:28:23.859490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.037 [2024-07-25 02:28:23.859511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.037 [2024-07-25 02:28:23.859524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.037 [2024-07-25 02:28:23.859529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.037 passed 00:03:37.037 Test: blob_persist_test ...passed 00:03:37.296 Test: blob_decouple_snapshot ...passed 00:03:37.296 Test: blob_seek_io_unit ...passed 00:03:37.296 Test: blob_nested_freezes ...passed 00:03:37.296 Test: blob_clone_resize ...passed 00:03:37.296 Test: blob_shallow_copy ...[2024-07-25 02:28:24.041417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:37.296 [2024-07-25 02:28:24.041487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:37.296 [2024-07-25 02:28:24.041494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:37.296 passed 00:03:37.296 Suite: blob_blob_copy_noextent 00:03:37.296 Test: blob_write ...passed 00:03:37.296 Test: blob_read ...passed 00:03:37.296 Test: blob_rw_verify ...passed 00:03:37.296 Test: blob_rw_verify_iov_nomem ...passed 00:03:37.555 Test: blob_rw_iov_read_only ...passed 00:03:37.555 Test: blob_xattr ...passed 00:03:37.555 Test: blob_dirty_shutdown ...passed 00:03:37.555 Test: blob_is_degraded ...passed 00:03:37.555 Suite: blob_esnap_bs_copy_noextent 00:03:37.555 Test: blob_esnap_create ...passed 00:03:37.555 Test: blob_esnap_thread_add_remove ...passed 00:03:37.555 Test: blob_esnap_clone_snapshot ...passed 00:03:37.555 Test: blob_esnap_clone_inflate ...passed 00:03:37.555 Test: blob_esnap_clone_decouple ...passed 00:03:37.555 Test: blob_esnap_clone_reload ...passed 00:03:37.814 Test: blob_esnap_hotplug ...passed 00:03:37.814 Test: blob_set_parent ...[2024-07-25 02:28:24.492481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:37.814 [2024-07-25 02:28:24.492533] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:37.815 [2024-07-25 02:28:24.492547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:37.815 [2024-07-25 02:28:24.492553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:37.815 [2024-07-25 02:28:24.492606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:37.815 passed 00:03:37.815 Test: blob_set_external_parent ...[2024-07-25 02:28:24.520442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:37.815 [2024-07-25 02:28:24.520475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:37.815 [2024-07-25 02:28:24.520497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:37.815 [2024-07-25 02:28:24.520526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:37.815 passed 00:03:37.815 Suite: blob_copy_extent 00:03:37.815 Test: blob_init ...[2024-07-25 02:28:24.529877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:37.815 passed 00:03:37.815 Test: blob_thin_provision ...passed 00:03:37.815 Test: blob_read_only ...passed 00:03:37.815 Test: bs_load ...passed 00:03:37.815 Test: bs_load_custom_cluster_size ...[2024-07-25 02:28:24.566845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:37.815 passed 00:03:37.815 Test: bs_load_after_failed_grow ...passed 00:03:37.815 Test: bs_cluster_sz ...[2024-07-25 02:28:24.585550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:37.815 [2024-07-25 02:28:24.585616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:37.815 [2024-07-25 02:28:24.585625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:37.815 passed 00:03:37.815 Test: bs_resize_md ...passed 00:03:37.815 Test: bs_destroy ...passed 00:03:37.815 Test: bs_type ...passed 00:03:37.815 Test: bs_super_block ...passed 00:03:37.815 Test: bs_test_recover_cluster_count ...passed 00:03:37.815 Test: bs_grow_live ...passed 00:03:37.815 Test: bs_grow_live_no_space ...passed 00:03:37.815 Test: bs_test_grow ...passed 00:03:37.815 Test: blob_serialize_test ...passed 00:03:37.815 Test: super_block_crc ...passed 00:03:37.815 Test: blob_thin_prov_write_count_io ...passed 00:03:37.815 Test: blob_thin_prov_unmap_cluster ...passed 00:03:38.074 Test: bs_load_iter_test ...passed 00:03:38.074 Test: blob_relations ...[2024-07-25 02:28:24.724820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.074 [2024-07-25 02:28:24.724889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 [2024-07-25 02:28:24.724952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.074 [2024-07-25 02:28:24.724958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 passed 00:03:38.074 Test: blob_relations2 ...[2024-07-25 02:28:24.735258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.074 [2024-07-25 02:28:24.735293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 [2024-07-25 02:28:24.735299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.074 [2024-07-25 02:28:24.735322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 [2024-07-25 02:28:24.735415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.074 [2024-07-25 02:28:24.735422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 [2024-07-25 02:28:24.735450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8387:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.074 [2024-07-25 02:28:24.735455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 passed 00:03:38.074 Test: blob_relations3 ...passed 00:03:38.074 Test: blobstore_clean_power_failure ...passed 00:03:38.074 Test: blob_delete_snapshot_power_failure ...[2024-07-25 02:28:24.865944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:38.074 [2024-07-25 02:28:24.875371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:38.074 [2024-07-25 02:28:24.884814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:38.074 [2024-07-25 02:28:24.884860] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:38.074 [2024-07-25 02:28:24.884883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 [2024-07-25 02:28:24.894244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:38.074 [2024-07-25 02:28:24.894273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:38.074 [2024-07-25 02:28:24.894279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:38.074 [2024-07-25 02:28:24.894301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.074 [2024-07-25 02:28:24.903743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:38.074 [2024-07-25 02:28:24.903767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:38.075 [2024-07-25 02:28:24.903772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8301:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:38.075 [2024-07-25 02:28:24.903778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.075 [2024-07-25 02:28:24.913161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8228:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:38.075 [2024-07-25 02:28:24.913185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.075 [2024-07-25 02:28:24.922538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8097:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:38.075 [2024-07-25 02:28:24.922576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.075 [2024-07-25 02:28:24.931997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8041:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:38.075 [2024-07-25 02:28:24.932035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.075 passed 00:03:38.075 Test: blob_create_snapshot_power_failure ...[2024-07-25 02:28:24.959929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:38.075 [2024-07-25 02:28:24.969339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:38.337 [2024-07-25 02:28:24.987983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:38.337 [2024-07-25 02:28:24.997363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:38.337 passed 00:03:38.337 Test: blob_io_unit ...passed 00:03:38.337 Test: blob_io_unit_compatibility ...passed 00:03:38.337 Test: blob_ext_md_pages ...passed 00:03:38.337 Test: blob_esnap_io_4096_4096 ...passed 00:03:38.337 Test: blob_esnap_io_512_512 ...passed 00:03:38.337 Test: blob_esnap_io_4096_512 ...passed 00:03:38.337 Test: 
blob_esnap_io_512_4096 ...passed 00:03:38.337 Test: blob_esnap_clone_resize ...passed 00:03:38.337 Suite: blob_bs_copy_extent 00:03:38.337 Test: blob_open ...passed 00:03:38.337 Test: blob_create ...[2024-07-25 02:28:25.194987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:38.337 passed 00:03:38.598 Test: blob_create_loop ...passed 00:03:38.598 Test: blob_create_fail ...[2024-07-25 02:28:25.263759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:38.598 passed 00:03:38.598 Test: blob_create_internal ...passed 00:03:38.598 Test: blob_create_zero_extent ...passed 00:03:38.598 Test: blob_snapshot ...passed 00:03:38.598 Test: blob_clone ...passed 00:03:38.598 Test: blob_inflate ...[2024-07-25 02:28:25.408875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:38.598 passed 00:03:38.598 Test: blob_delete ...passed 00:03:38.598 Test: blob_resize_test ...[2024-07-25 02:28:25.464362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7846:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:38.598 passed 00:03:38.857 Test: blob_resize_thin_test ...passed 00:03:38.857 Test: channel_ops ...passed 00:03:38.857 Test: blob_super ...passed 00:03:38.857 Test: blob_rw_verify_iov ...passed 00:03:38.857 Test: blob_unmap ...passed 00:03:38.857 Test: blob_iter ...passed 00:03:38.857 Test: blob_parse_md ...passed 00:03:38.857 Test: bs_load_pending_removal ...passed 00:03:38.857 Test: bs_unload ...[2024-07-25 02:28:25.715190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:38.857 passed 00:03:38.857 Test: bs_usable_clusters ...passed 00:03:39.116 Test: blob_crc ...[2024-07-25 02:28:25.770719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:39.116 [2024-07-25 02:28:25.770770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:39.116 passed 00:03:39.116 Test: blob_flags ...passed 00:03:39.116 Test: bs_version ...passed 00:03:39.116 Test: blob_set_xattrs_test ...[2024-07-25 02:28:25.853962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:39.116 [2024-07-25 02:28:25.854010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:39.116 passed 00:03:39.116 Test: blob_thin_prov_alloc ...passed 00:03:39.116 Test: blob_insert_cluster_msg_test ...passed 00:03:39.116 Test: blob_thin_prov_rw ...passed 00:03:39.116 Test: blob_thin_prov_rle ...passed 00:03:39.116 Test: blob_thin_prov_rw_iov ...passed 00:03:39.376 Test: blob_snapshot_rw ...passed 00:03:39.376 Test: blob_snapshot_rw_iov ...passed 00:03:39.376 Test: blob_inflate_rw ...passed 00:03:39.376 Test: blob_snapshot_freeze_io ...passed 00:03:39.376 Test: blob_operation_split_rw ...passed 00:03:39.376 Test: blob_operation_split_rw_iov ...passed 00:03:39.636 Test: blob_simultaneous_operations ...[2024-07-25 02:28:26.276891] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:39.636 [2024-07-25 02:28:26.276948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:39.636 [2024-07-25 02:28:26.277201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:39.636 [2024-07-25 02:28:26.277223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:39.636 [2024-07-25 02:28:26.279162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:39.636 [2024-07-25 02:28:26.279185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:39.636 [2024-07-25 02:28:26.279200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8414:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:39.636 [2024-07-25 02:28:26.279206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8354:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:39.636 passed 00:03:39.636 Test: blob_persist_test ...passed 00:03:39.636 Test: blob_decouple_snapshot ...passed 00:03:39.636 Test: blob_seek_io_unit ...passed 00:03:39.636 Test: blob_nested_freezes ...passed 00:03:39.636 Test: blob_clone_resize ...passed 00:03:39.636 Test: blob_shallow_copy ...[2024-07-25 02:28:26.464644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:39.636 [2024-07-25 02:28:26.464705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:39.636 [2024-07-25 02:28:26.464713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:39.636 passed 00:03:39.636 Suite: blob_blob_copy_extent 00:03:39.636 Test: blob_write ...passed 00:03:39.896 Test: blob_read ...passed 00:03:39.896 Test: blob_rw_verify ...passed 00:03:39.896 Test: blob_rw_verify_iov_nomem ...passed 00:03:39.896 Test: blob_rw_iov_read_only ...passed 00:03:39.896 Test: blob_xattr ...passed 00:03:39.896 Test: blob_dirty_shutdown ...passed 00:03:39.896 Test: blob_is_degraded ...passed 00:03:39.896 Suite: blob_esnap_bs_copy_extent 00:03:39.896 Test: blob_esnap_create ...passed 00:03:39.896 Test: blob_esnap_thread_add_remove ...passed 00:03:39.896 Test: blob_esnap_clone_snapshot ...passed 00:03:40.156 Test: blob_esnap_clone_inflate ...passed 00:03:40.156 Test: blob_esnap_clone_decouple ...passed 00:03:40.156 Test: blob_esnap_clone_reload ...passed 00:03:40.156 Test: blob_esnap_hotplug ...passed 00:03:40.156 Test: blob_set_parent ...[2024-07-25 02:28:26.916389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:40.156 [2024-07-25 02:28:26.916446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:40.156 [2024-07-25 02:28:26.916464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:40.156 
[2024-07-25 02:28:26.916487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:40.156 [2024-07-25 02:28:26.916527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:40.156 passed 00:03:40.156 Test: blob_set_external_parent ...[2024-07-25 02:28:26.944204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7788:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:40.156 [2024-07-25 02:28:26.944245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7797:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:40.156 [2024-07-25 02:28:26.944252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7749:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:40.156 [2024-07-25 02:28:26.944300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7755:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:40.156 passed 00:03:40.156 00:03:40.156 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.156 suites 16 16 n/a 0 0 00:03:40.156 tests 376 376 376 0 0 00:03:40.156 asserts 143973 143973 143973 0 n/a 00:03:40.156 00:03:40.156 Elapsed time = 9.648 seconds 00:03:40.156 02:28:26 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:40.156 00:03:40.156 00:03:40.156 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.156 http://cunit.sourceforge.net/ 00:03:40.156 00:03:40.156 00:03:40.156 Suite: blob_bdev 00:03:40.156 Test: create_bs_dev ...passed 00:03:40.156 Test: create_bs_dev_ro ...[2024-07-25 02:28:26.966105] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:40.156 passed 00:03:40.156 Test: create_bs_dev_rw ...passed 00:03:40.156 Test: claim_bs_dev ...[2024-07-25 02:28:26.966520] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:40.156 passed 00:03:40.156 Test: claim_bs_dev_ro ...passed 00:03:40.156 Test: deferred_destroy_refs ...passed 00:03:40.156 Test: deferred_destroy_channels ...passed 00:03:40.156 Test: deferred_destroy_threads ...passed 00:03:40.156 00:03:40.156 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.156 suites 1 1 n/a 0 0 00:03:40.156 tests 8 8 8 0 0 00:03:40.156 asserts 119 119 119 0 n/a 00:03:40.156 00:03:40.156 Elapsed time = 0.000 seconds 00:03:40.156 02:28:26 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:40.156 00:03:40.156 00:03:40.156 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.156 http://cunit.sourceforge.net/ 00:03:40.156 00:03:40.156 00:03:40.156 Suite: tree 00:03:40.156 Test: blobfs_tree_op_test ...passed 00:03:40.156 00:03:40.156 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.156 suites 1 1 n/a 0 0 00:03:40.156 tests 1 1 1 0 0 00:03:40.156 asserts 27 27 27 0 n/a 00:03:40.156 00:03:40.156 Elapsed time = 0.000 seconds 00:03:40.156 02:28:26 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:40.156 00:03:40.156 00:03:40.157 
CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.157 http://cunit.sourceforge.net/ 00:03:40.157 00:03:40.157 00:03:40.157 Suite: blobfs_async_ut 00:03:40.157 Test: fs_init ...passed 00:03:40.157 Test: fs_open ...passed 00:03:40.157 Test: fs_create ...passed 00:03:40.417 Test: fs_truncate ...passed 00:03:40.417 Test: fs_rename ...passed 00:03:40.417 Test: fs_rw_async ...[2024-07-25 02:28:27.070330] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:40.417 passed 00:03:40.417 Test: fs_writev_readv_async ...passed 00:03:40.417 Test: tree_find_buffer_ut ...passed 00:03:40.417 Test: channel_ops ...passed 00:03:40.417 Test: channel_ops_sync ...passed 00:03:40.417 00:03:40.417 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.417 suites 1 1 n/a 0 0 00:03:40.417 tests 10 10 10 0 0 00:03:40.417 asserts 292 292 292 0 n/a 00:03:40.417 00:03:40.417 Elapsed time = 0.117 seconds 00:03:40.417 02:28:27 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:40.417 00:03:40.417 00:03:40.417 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.417 http://cunit.sourceforge.net/ 00:03:40.417 00:03:40.417 00:03:40.417 Suite: blobfs_sync_ut 00:03:40.417 Test: cache_read_after_write ...passed 00:03:40.417 Test: file_length ...[2024-07-25 02:28:27.168899] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:40.417 passed 00:03:40.417 Test: append_write_to_extend_blob ...passed 00:03:40.417 Test: partial_buffer ...passed 00:03:40.417 Test: cache_write_null_buffer ...passed 00:03:40.417 Test: fs_create_sync ...passed 00:03:40.417 Test: fs_rename_sync ...passed 00:03:40.417 Test: cache_append_no_cache ...passed 00:03:40.417 Test: fs_delete_file_without_close ...passed 00:03:40.417 00:03:40.417 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.417 suites 1 1 n/a 0 0 00:03:40.417 tests 9 9 9 0 0 00:03:40.417 asserts 345 345 345 0 n/a 00:03:40.417 00:03:40.417 Elapsed time = 0.250 seconds 00:03:40.417 02:28:27 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:40.417 00:03:40.417 00:03:40.417 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.417 http://cunit.sourceforge.net/ 00:03:40.417 00:03:40.417 00:03:40.417 Suite: blobfs_bdev_ut 00:03:40.417 Test: spdk_blobfs_bdev_detect_test ...[2024-07-25 02:28:27.253679] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:40.417 passed 00:03:40.417 Test: spdk_blobfs_bdev_create_test ...passed 00:03:40.417 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:40.417 00:03:40.417 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.417 suites 1 1 n/a 0 0 00:03:40.417 tests 3 3 3 0 0 00:03:40.417 asserts 9 9 9 0 n/a 00:03:40.417 00:03:40.417 Elapsed time = 0.000 seconds 00:03:40.417 [2024-07-25 02:28:27.253808] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:40.417 00:03:40.417 real 0m9.957s 00:03:40.417 user 0m9.923s 00:03:40.417 sys 0m0.157s 00:03:40.417 02:28:27 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.418 02:28:27 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:03:40.418 ************************************ 00:03:40.418 END TEST unittest_blob_blobfs 00:03:40.418 ************************************ 00:03:40.418 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.418 02:28:27 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:03:40.418 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.418 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.418 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.418 ************************************ 00:03:40.418 START TEST unittest_event 00:03:40.418 ************************************ 00:03:40.418 02:28:27 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:03:40.418 02:28:27 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:40.418 00:03:40.418 00:03:40.418 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.418 http://cunit.sourceforge.net/ 00:03:40.418 00:03:40.418 00:03:40.418 Suite: app_suite 00:03:40.418 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:40.418 00:03:40.418 CPU options: 00:03:40.418 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:40.418 (like [0,1,10]) 00:03:40.418 --lcores lcore to CPU mapping list. The list is in the format: 00:03:40.418 [<,lcores[@CPUs]>...] 00:03:40.418 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:40.418 Within the group, '-' is used for range separator, 00:03:40.418 ',' is used for single number separator. 00:03:40.418 '( )' can be omitted for single element group, 00:03:40.418 '@' can be omitted if cpus and lcores have the same value 00:03:40.418 --disable-cpumask-locks Disable CPU core lock files. 00:03:40.418 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:40.418 pollers in the app support interrupt mode) 00:03:40.418 -p, --main-core main (primary) core for DPDK 00:03:40.418 00:03:40.418 Configuration options: 00:03:40.418 -c, --config, --json JSON config file 00:03:40.418 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:40.418 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:03:40.418 app_ut: invalid option -- z 00:03:40.418 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:40.418 --rpcs-allowed comma-separated list of permitted RPCS 00:03:40.418 --json-ignore-init-errors don't exit on invalid config entry 00:03:40.418 00:03:40.418 Memory options: 00:03:40.418 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:40.418 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:40.418 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:40.418 -R, --huge-unlink unlink huge files after initialization 00:03:40.418 -n, --mem-channels number of memory channels used for DPDK 00:03:40.418 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:40.418 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:40.418 --no-huge run without using hugepages 00:03:40.418 -i, --shm-id shared memory ID (optional) 00:03:40.418 -g, --single-file-segments force creating just one hugetlbfs file 00:03:40.418 00:03:40.418 PCI options: 00:03:40.418 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:40.418 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:40.418 -u, --no-pci disable PCI access 00:03:40.418 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:40.418 00:03:40.418 Log options: 00:03:40.418 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:40.418 --silence-noticelog disable notice level logging to stderr 00:03:40.418 00:03:40.418 Trace options: 00:03:40.418 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:40.418 setting 0 to disable trace (default 32768) 00:03:40.418 Tracepoints vary in size and can use more than one trace entry. 00:03:40.418 -e, --tpoint-group [:] 00:03:40.418 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:40.418 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:40.418 a tracepoint group. First tpoint inside a group can be enabled by 00:03:40.418 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:40.418 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:40.418 in /include/spdk_internal/trace_defs.h 00:03:40.418 00:03:40.418 Other options: 00:03:40.418 -h, --help show this usage 00:03:40.418 -v, --version print SPDK version 00:03:40.418 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:40.418 --env-context Opaque context for use of the env implementation 00:03:40.418 app_ut [options] 00:03:40.418 00:03:40.418 CPU options: 00:03:40.418 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:40.418 (like [0,1,10]) 00:03:40.418 --lcores lcore to CPU mapping list. The list is in the format: 00:03:40.418 [<,lcores[@CPUs]>...] 00:03:40.418 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:40.418 Within the group, '-' is used for range separator, 00:03:40.418 ',' is used for single number separator. 00:03:40.418 '( )' can be omitted for single element group, 00:03:40.418 '@' can be omitted if cpus and lcores have the same value 00:03:40.418 --disable-cpumask-locks Disable CPU core lock files. 
00:03:40.418 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:40.418 pollers in the app support interrupt mode) 00:03:40.418 -p, --main-core main (primary) core for DPDK 00:03:40.418 00:03:40.418 Configuration options: 00:03:40.418 -c, --config, --json JSON config file 00:03:40.418 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:40.418 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:40.418 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:40.418 --rpcs-allowed comma-separated list of permitted RPCS 00:03:40.418 --json-ignore-init-errors don't exit on invalid config entry 00:03:40.418 00:03:40.418 Memory options: 00:03:40.418 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:40.418 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:40.418 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:40.418 -R, --huge-unlink unlink huge files after initialization 00:03:40.418 -n, --mem-channels number of memory channels used for DPDK 00:03:40.418 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:40.418 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:40.418 --no-huge run without using hugepages 00:03:40.418 -i, --shm-id shared memory ID (optional) 00:03:40.418 -g, --single-file-segments force creating just one hugetlbfs file 00:03:40.418 00:03:40.418 PCI options: 00:03:40.418 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:40.418 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:40.418 -u, --no-pci disable PCI access 00:03:40.418 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:40.418 00:03:40.418 Log options: 00:03:40.418 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:40.418 --silence-noticelog disable notice level logging to stderr 00:03:40.418 00:03:40.418 Trace options: 00:03:40.418 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:40.418 setting 0 to disable trace (default 32768) 00:03:40.418 Tracepoints vary in size and can use more than one trace entry. 00:03:40.418 -e, --tpoint-group [:] 00:03:40.418 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:40.418 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:40.418 a tracepoint group. First tpoint inside a group can be enabled by 00:03:40.418 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:40.418 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:40.418 in /include/spdk_internal/trace_defs.h 00:03:40.418 00:03:40.418 Other options: 00:03:40.418 -h, --help show this usage 00:03:40.418 -v, --version print SPDK version 00:03:40.418 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:40.418 --env-context Opaque context for use of the env implementation 00:03:40.418 app_ut: unrecognized option `--test-long-opt' 00:03:40.418 [2024-07-25 02:28:27.310046] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
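The duplicated-option error above is the constraint spdk_app_parse_args enforces: the app-specific getopt string must not reuse a letter that the generic SPDK options shown in the usage text already claim. A minimal sketch of an application-side parser that avoids the collision, assuming the public spdk_app_* API; the option letter 'x' and the application name are illustrative only and are not taken from app_ut:

#include <errno.h>
#include <stdio.h>

#include "spdk/event.h"

static int
my_parse_arg(int ch, char *arg)
{
	(void)arg;
	switch (ch) {
	case 'x':	/* hypothetical app-specific flag */
		return 0;
	default:
		return -EINVAL;
	}
}

static void
my_usage(void)
{
	printf(" -x                        enable the example flag\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "example_app";

	/* "x" must not collide with the generic SPDK letters (e.g. 'c'), which is
	 * exactly what the "Duplicated option 'c'" error above is checking. */
	if (spdk_app_parse_args(argc, argv, &opts, "x", NULL,
				my_parse_arg, my_usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
		return 1;
	}
	/* a real application would continue with spdk_app_start(&opts, ...) here */
	return 0;
}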
00:03:40.418 app_ut [options] 00:03:40.418 00:03:40.418 CPU options: 00:03:40.418 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:40.418 (like [0,1,10]) 00:03:40.418 --lcores lcore to CPU mapping list. The list is in the format: 00:03:40.418 [<,lcores[@CPUs]>...] 00:03:40.419 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:40.419 Within the group, '-' is used for range separator, 00:03:40.419 [2024-07-25 02:28:27.310389] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:40.419 ',' is used for single number separator. 00:03:40.419 '( )' can be omitted for single element group, 00:03:40.419 '@' can be omitted if cpus and lcores have the same value 00:03:40.419 --disable-cpumask-locks Disable CPU core lock files. 00:03:40.419 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:40.419 pollers in the app support interrupt mode) 00:03:40.419 -p, --main-core main (primary) core for DPDK 00:03:40.419 00:03:40.419 Configuration options: 00:03:40.419 -c, --config, --json JSON config file 00:03:40.419 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:40.419 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:40.419 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:40.419 --rpcs-allowed comma-separated list of permitted RPCS 00:03:40.419 --json-ignore-init-errors don't exit on invalid config entry 00:03:40.419 00:03:40.419 Memory options: 00:03:40.419 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:40.419 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:40.419 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:40.419 -R, --huge-unlink unlink huge files after initialization 00:03:40.419 -n, --mem-channels number of memory channels used for DPDK 00:03:40.419 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:40.419 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:40.419 --no-huge run without using hugepages 00:03:40.419 -i, --shm-id shared memory ID (optional) 00:03:40.419 -g, --single-file-segments force creating just one hugetlbfs file 00:03:40.419 00:03:40.419 PCI options: 00:03:40.419 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:40.419 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:40.419 -u, --no-pci disable PCI access 00:03:40.419 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:40.419 00:03:40.419 Log options: 00:03:40.419 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:40.419 --silence-noticelog disable notice level logging to stderr 00:03:40.419 00:03:40.419 Trace options: 00:03:40.419 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:40.419 setting 0 to disable trace (default 32768) 00:03:40.419 Tracepoints vary in size and can use more than one trace entry. 00:03:40.419 -e, --tpoint-group [:] 00:03:40.419 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:40.419 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:40.419 a tracepoint group. First tpoint inside a group can be enabled by 00:03:40.419 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:03:40.419 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:40.419 in /include/spdk_internal/trace_defs.h 00:03:40.419 00:03:40.419 Other options: 00:03:40.419 -h, --help show this usage 00:03:40.419 -v, --version print SPDK version 00:03:40.419 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:40.419 --env-context Opaque context for use of the env implementation 00:03:40.419 [2024-07-25 02:28:27.310581] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:40.419 passed 00:03:40.419 00:03:40.419 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.419 suites 1 1 n/a 0 0 00:03:40.419 tests 1 1 1 0 0 00:03:40.419 asserts 8 8 8 0 n/a 00:03:40.419 00:03:40.419 Elapsed time = 0.000 seconds 00:03:40.679 02:28:27 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:40.680 00:03:40.680 00:03:40.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.680 http://cunit.sourceforge.net/ 00:03:40.680 00:03:40.680 00:03:40.680 Suite: app_suite 00:03:40.680 Test: test_create_reactor ...passed 00:03:40.680 Test: test_init_reactors ...passed 00:03:40.680 Test: test_event_call ...passed 00:03:40.680 Test: test_schedule_thread ...passed 00:03:40.680 Test: test_reschedule_thread ...passed 00:03:40.680 Test: test_bind_thread ...passed 00:03:40.680 Test: test_for_each_reactor ...passed 00:03:40.680 Test: test_reactor_stats ...passed 00:03:40.680 Test: test_scheduler ...passed 00:03:40.680 Test: test_governor ...passed 00:03:40.680 00:03:40.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.680 suites 1 1 n/a 0 0 00:03:40.680 tests 10 10 10 0 0 00:03:40.680 asserts 336 336 336 0 n/a 00:03:40.680 00:03:40.680 Elapsed time = 0.008 seconds 00:03:40.680 00:03:40.680 real 0m0.023s 00:03:40.680 user 0m0.007s 00:03:40.680 sys 0m0.017s 00:03:40.680 02:28:27 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.680 02:28:27 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:03:40.680 ************************************ 00:03:40.680 END TEST unittest_event 00:03:40.680 ************************************ 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.680 02:28:27 unittest -- unit/unittest.sh@235 -- # uname -s 00:03:40.680 02:28:27 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:03:40.680 02:28:27 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.680 ************************************ 00:03:40.680 START TEST unittest_accel 00:03:40.680 ************************************ 00:03:40.680 02:28:27 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:40.680 00:03:40.680 00:03:40.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.680 http://cunit.sourceforge.net/ 00:03:40.680 00:03:40.680 00:03:40.680 Suite: accel_sequence 00:03:40.680 Test: test_sequence_fill_copy ...passed 00:03:40.680 Test: test_sequence_abort ...passed 00:03:40.680 Test: 
test_sequence_append_error ...passed 00:03:40.680 Test: test_sequence_completion_error ...[2024-07-25 02:28:27.392842] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x28d37e4ce140 00:03:40.680 [2024-07-25 02:28:27.393265] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x28d37e4ce140 00:03:40.680 [2024-07-25 02:28:27.393320] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1870:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x28d37e4ce140 00:03:40.680 [2024-07-25 02:28:27.393362] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1870:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x28d37e4ce140 00:03:40.680 passed 00:03:40.680 Test: test_sequence_decompress ...passed 00:03:40.680 Test: test_sequence_reverse ...passed 00:03:40.680 Test: test_sequence_copy_elision ...passed 00:03:40.680 Test: test_sequence_accel_buffers ...passed 00:03:40.680 Test: test_sequence_memory_domain ...[2024-07-25 02:28:27.395798] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1762:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:40.680 [2024-07-25 02:28:27.395890] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1801:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:40.680 passed 00:03:40.680 Test: test_sequence_module_memory_domain ...passed 00:03:40.680 Test: test_sequence_crypto ...passed 00:03:40.680 Test: test_sequence_driver ...[2024-07-25 02:28:27.397077] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1909:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x28d37e4ce600 using driver: ut 00:03:40.680 [2024-07-25 02:28:27.397140] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1974:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x28d37e4ce600 through driver: ut 00:03:40.680 passed 00:03:40.680 Test: test_sequence_same_iovs ...passed 00:03:40.680 Test: test_sequence_crc32 ...passed 00:03:40.680 Suite: accel 00:03:40.680 Test: test_spdk_accel_task_complete ...passed 00:03:40.680 Test: test_get_task ...passed 00:03:40.680 Test: test_spdk_accel_submit_copy ...passed 00:03:40.680 Test: test_spdk_accel_submit_dualcast ...[2024-07-25 02:28:27.398113] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:40.680 [2024-07-25 02:28:27.398148] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 425:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:40.680 passed 00:03:40.680 Test: test_spdk_accel_submit_compare ...passed 00:03:40.680 Test: test_spdk_accel_submit_fill ...passed 00:03:40.680 Test: test_spdk_accel_submit_crc32c ...passed 00:03:40.680 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:40.680 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:40.680 Test: test_spdk_accel_submit_xor ...passed 00:03:40.680 Test: test_spdk_accel_module_find_by_name ...passed 00:03:40.680 Test: test_spdk_accel_module_register ...passed 00:03:40.680 00:03:40.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.680 suites 2 2 n/a 0 0 00:03:40.680 tests 26 26 26 0 0 00:03:40.680 asserts 830 830 830 0 n/a 00:03:40.680 00:03:40.680 Elapsed time = 0.008 seconds 00:03:40.680 00:03:40.680 real 0m0.019s 00:03:40.680 user 0m0.018s 00:03:40.680 sys 
0m0.001s 00:03:40.680 02:28:27 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.680 02:28:27 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:03:40.680 ************************************ 00:03:40.680 END TEST unittest_accel 00:03:40.680 ************************************ 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.680 02:28:27 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.680 ************************************ 00:03:40.680 START TEST unittest_ioat 00:03:40.680 ************************************ 00:03:40.680 02:28:27 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:40.680 00:03:40.680 00:03:40.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.680 http://cunit.sourceforge.net/ 00:03:40.680 00:03:40.680 00:03:40.680 Suite: ioat 00:03:40.680 Test: ioat_state_check ...passed 00:03:40.680 00:03:40.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.680 suites 1 1 n/a 0 0 00:03:40.680 tests 1 1 1 0 0 00:03:40.680 asserts 32 32 32 0 n/a 00:03:40.680 00:03:40.680 Elapsed time = 0.000 seconds 00:03:40.680 00:03:40.680 real 0m0.008s 00:03:40.680 user 0m0.000s 00:03:40.680 sys 0m0.012s 00:03:40.680 02:28:27 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.680 02:28:27 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:03:40.680 ************************************ 00:03:40.680 END TEST unittest_ioat 00:03:40.680 ************************************ 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.680 02:28:27 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:40.680 02:28:27 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.680 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.680 ************************************ 00:03:40.680 START TEST unittest_idxd_user 00:03:40.680 ************************************ 00:03:40.680 02:28:27 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:40.680 00:03:40.680 00:03:40.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.680 http://cunit.sourceforge.net/ 00:03:40.680 00:03:40.680 00:03:40.680 Suite: idxd_user 00:03:40.680 Test: test_idxd_wait_cmd ...[2024-07-25 02:28:27.520306] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:40.680 passed 00:03:40.680 Test: test_idxd_reset_dev ...[2024-07-25 02:28:27.520649] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:40.680 [2024-07-25 02:28:27.520714] 
/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:40.680 [2024-07-25 02:28:27.520737] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:40.680 passed 00:03:40.680 Test: test_idxd_group_config ...passed 00:03:40.680 Test: test_idxd_wq_config ...passed 00:03:40.680 00:03:40.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.680 suites 1 1 n/a 0 0 00:03:40.680 tests 4 4 4 0 0 00:03:40.680 asserts 20 20 20 0 n/a 00:03:40.680 00:03:40.681 Elapsed time = 0.000 seconds 00:03:40.681 00:03:40.681 real 0m0.009s 00:03:40.681 user 0m0.008s 00:03:40.681 sys 0m0.007s 00:03:40.681 02:28:27 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.681 02:28:27 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:03:40.681 ************************************ 00:03:40.681 END TEST unittest_idxd_user 00:03:40.681 ************************************ 00:03:40.681 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.681 02:28:27 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:03:40.681 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.681 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.681 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.681 ************************************ 00:03:40.681 START TEST unittest_iscsi 00:03:40.681 ************************************ 00:03:40.681 02:28:27 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:03:40.681 02:28:27 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:40.942 00:03:40.942 00:03:40.942 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.942 http://cunit.sourceforge.net/ 00:03:40.942 00:03:40.942 00:03:40.942 Suite: conn_suite 00:03:40.942 Test: read_task_split_in_order_case ...passed 00:03:40.942 Test: read_task_split_reverse_order_case ...passed 00:03:40.942 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:40.942 Test: process_non_read_task_completion_test ...passed 00:03:40.942 Test: free_tasks_on_connection ...passed 00:03:40.942 Test: free_tasks_with_queued_datain ...passed 00:03:40.942 Test: abort_queued_datain_task_test ...passed 00:03:40.942 Test: abort_queued_datain_tasks_test ...passed 00:03:40.942 00:03:40.942 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.942 suites 1 1 n/a 0 0 00:03:40.942 tests 8 8 8 0 0 00:03:40.942 asserts 230 230 230 0 n/a 00:03:40.942 00:03:40.942 Elapsed time = 0.000 seconds 00:03:40.942 02:28:27 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:40.942 00:03:40.942 00:03:40.942 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.942 http://cunit.sourceforge.net/ 00:03:40.942 00:03:40.942 00:03:40.942 Suite: iscsi_suite 00:03:40.942 Test: param_negotiation_test ...passed 00:03:40.942 Test: list_negotiation_test ...passed 00:03:40.942 Test: parse_valid_test ...passed 00:03:40.942 Test: parse_invalid_test ...[2024-07-25 02:28:27.586287] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:40.942 [2024-07-25 02:28:27.586652] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' 
not found 00:03:40.942 [2024-07-25 02:28:27.586697] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:40.942 [2024-07-25 02:28:27.586751] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:40.942 [2024-07-25 02:28:27.586778] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:40.942 [2024-07-25 02:28:27.586802] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:40.942 passed 00:03:40.942 00:03:40.942 [2024-07-25 02:28:27.586825] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:40.942 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.942 suites 1 1 n/a 0 0 00:03:40.942 tests 4 4 4 0 0 00:03:40.942 asserts 161 161 161 0 n/a 00:03:40.942 00:03:40.942 Elapsed time = 0.000 seconds 00:03:40.942 02:28:27 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:40.942 00:03:40.942 00:03:40.942 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.942 http://cunit.sourceforge.net/ 00:03:40.942 00:03:40.942 00:03:40.942 Suite: iscsi_target_node_suite 00:03:40.942 Test: add_lun_test_cases ...[2024-07-25 02:28:27.595513] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:40.942 [2024-07-25 02:28:27.595873] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:40.942 [2024-07-25 02:28:27.595916] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:40.942 [2024-07-25 02:28:27.595938] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:40.942 passed 00:03:40.942 Test: allow_any_allowed ...passed 00:03:40.942 Test: allow_ipv6_allowed ...passed 00:03:40.942 Test: allow_ipv6_denied ...[2024-07-25 02:28:27.595957] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:40.942 passed 00:03:40.942 Test: allow_ipv6_invalid ...passed 00:03:40.942 Test: allow_ipv4_allowed ...passed 00:03:40.942 Test: allow_ipv4_denied ...passed 00:03:40.942 Test: allow_ipv4_invalid ...passed 00:03:40.942 Test: node_access_allowed ...passed 00:03:40.942 Test: node_access_denied_by_empty_netmask ...passed 00:03:40.942 Test: node_access_multi_initiator_groups_cases ...passed 00:03:40.942 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:40.942 Test: chap_param_test_cases ...[2024-07-25 02:28:27.596151] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:40.942 [2024-07-25 02:28:27.596181] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:40.942 [2024-07-25 02:28:27.596201] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:40.942 passed[2024-07-25 02:28:27.596221] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:40.942 [2024-07-25 02:28:27.596240] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:40.942 00:03:40.942 00:03:40.942 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.942 suites 1 1 n/a 0 0 00:03:40.942 tests 13 13 13 0 0 00:03:40.942 asserts 50 50 50 0 n/a 00:03:40.942 00:03:40.942 Elapsed time = 0.000 seconds 00:03:40.942 02:28:27 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:40.942 00:03:40.942 00:03:40.942 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.942 http://cunit.sourceforge.net/ 00:03:40.942 00:03:40.942 00:03:40.942 Suite: iscsi_suite 00:03:40.942 Test: op_login_check_target_test ...[2024-07-25 02:28:27.602200] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:03:40.942 passed 00:03:40.942 Test: op_login_session_normal_test ...[2024-07-25 02:28:27.602365] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:40.942 [2024-07-25 02:28:27.602378] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:40.943 [2024-07-25 02:28:27.602388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:40.943 [2024-07-25 02:28:27.602409] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:40.943 [2024-07-25 02:28:27.602420] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:40.943 passed 00:03:40.943 Test: maxburstlength_test ...[2024-07-25 02:28:27.602443] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:40.943 [2024-07-25 02:28:27.602452] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:40.943 [2024-07-25 02:28:27.602508] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:40.943 [2024-07-25 02:28:27.602522] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:40.943 passed 00:03:40.943 Test: underflow_for_read_transfer_test ...passed 00:03:40.943 Test: underflow_for_zero_read_transfer_test ...passed 00:03:40.943 Test: underflow_for_request_sense_test ...passed 00:03:40.943 Test: underflow_for_check_condition_test ...passed 00:03:40.943 Test: add_transfer_task_test ...passed 00:03:40.943 Test: get_transfer_task_test ...passed 00:03:40.943 Test: del_transfer_task_test ...passed 00:03:40.943 Test: clear_all_transfer_tasks_test ...passed 00:03:40.943 Test: build_iovs_test ...passed 00:03:40.943 Test: build_iovs_with_md_test ...passed 00:03:40.943 Test: pdu_hdr_op_login_test ...[2024-07-25 02:28:27.602666] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:40.943 [2024-07-25 02:28:27.602680] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:40.943 passed 00:03:40.943 Test: pdu_hdr_op_text_test 
...[2024-07-25 02:28:27.602689] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:40.943 [2024-07-25 02:28:27.602703] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:40.943 [2024-07-25 02:28:27.602712] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:40.943 passed 00:03:40.943 Test: pdu_hdr_op_logout_test ...passed 00:03:40.943 Test: pdu_hdr_op_scsi_test ...[2024-07-25 02:28:27.602722] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:03:40.943 [2024-07-25 02:28:27.602733] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:40.943 [2024-07-25 02:28:27.602745] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:40.943 [2024-07-25 02:28:27.602756] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:40.943 passed 00:03:40.943 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-25 02:28:27.602765] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:40.943 [2024-07-25 02:28:27.602774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:40.943 [2024-07-25 02:28:27.602783] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:40.943 [2024-07-25 02:28:27.602793] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:40.943 passed 00:03:40.943 Test: pdu_hdr_op_nopout_test ...[2024-07-25 02:28:27.602804] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:40.943 [2024-07-25 02:28:27.602820] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:40.943 [2024-07-25 02:28:27.602832] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:40.943 [2024-07-25 02:28:27.602841] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:40.943 passed 00:03:40.943 Test: pdu_hdr_op_data_test ...[2024-07-25 02:28:27.602849] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:40.943 [2024-07-25 02:28:27.602857] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:40.943 [2024-07-25 02:28:27.602867] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:40.943 [2024-07-25 02:28:27.602877] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:40.943 [2024-07-25 
02:28:27.602885] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:40.943 [2024-07-25 02:28:27.602894] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:40.943 [2024-07-25 02:28:27.602903] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:40.943 [2024-07-25 02:28:27.602912] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:40.943 [2024-07-25 02:28:27.602921] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:40.943 passed 00:03:40.943 Test: empty_text_with_cbit_test ...passed 00:03:40.943 Test: pdu_payload_read_test ...[2024-07-25 02:28:27.603299] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:40.943 passed 00:03:40.943 Test: data_out_pdu_sequence_test ...passed 00:03:40.943 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:40.943 00:03:40.943 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.943 suites 1 1 n/a 0 0 00:03:40.943 tests 24 24 24 0 0 00:03:40.943 asserts 150253 150253 150253 0 n/a 00:03:40.943 00:03:40.943 Elapsed time = 0.000 seconds 00:03:40.943 02:28:27 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:40.943 00:03:40.943 00:03:40.943 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.943 http://cunit.sourceforge.net/ 00:03:40.943 00:03:40.943 00:03:40.943 Suite: init_grp_suite 00:03:40.943 Test: create_initiator_group_success_case ...passed 00:03:40.943 Test: find_initiator_group_success_case ...passed 00:03:40.943 Test: register_initiator_group_twice_case ...passed 00:03:40.943 Test: add_initiator_name_success_case ...passed 00:03:40.943 Test: add_initiator_name_fail_case ...[2024-07-25 02:28:27.613646] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:40.943 passed 00:03:40.943 Test: delete_all_initiator_names_success_case ...passed 00:03:40.943 Test: add_netmask_success_case ...passed 00:03:40.943 Test: add_netmask_fail_case ...[2024-07-25 02:28:27.614066] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:40.943 passed 00:03:40.943 Test: delete_all_netmasks_success_case ...passed 00:03:40.943 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:40.943 Test: netmask_overwrite_all_to_any_case ...passed 00:03:40.943 Test: add_delete_initiator_names_case ...passed 00:03:40.943 Test: add_duplicated_initiator_names_case ...passed 00:03:40.943 Test: delete_nonexisting_initiator_names_case ...passed 00:03:40.943 Test: add_delete_netmasks_case ...passed 00:03:40.943 Test: add_duplicated_netmasks_case ...passed 00:03:40.943 Test: delete_nonexisting_netmasks_case ...passed 00:03:40.943 00:03:40.943 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.943 suites 1 1 n/a 0 0 00:03:40.943 tests 17 17 17 0 0 00:03:40.943 asserts 108 108 108 0 n/a 00:03:40.943 00:03:40.943 Elapsed time = 0.000 seconds 00:03:40.943 02:28:27 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:40.943 00:03:40.943 00:03:40.943 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.943 http://cunit.sourceforge.net/ 00:03:40.943 00:03:40.943 00:03:40.943 Suite: portal_grp_suite 00:03:40.943 Test: portal_create_ipv4_normal_case ...passed 00:03:40.943 Test: portal_create_ipv6_normal_case ...passed 00:03:40.943 Test: portal_create_ipv4_wildcard_case ...passed 00:03:40.943 Test: portal_create_ipv6_wildcard_case ...passed 00:03:40.943 Test: portal_create_twice_case ...[2024-07-25 02:28:27.623690] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:03:40.943 passed 00:03:40.944 Test: portal_grp_register_unregister_case ...passed 00:03:40.944 Test: portal_grp_register_twice_case ...passed 00:03:40.944 Test: portal_grp_add_delete_case ...passed 00:03:40.944 Test: portal_grp_add_delete_twice_case ...passed 00:03:40.944 00:03:40.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.944 suites 1 1 n/a 0 0 00:03:40.944 tests 9 9 9 0 0 00:03:40.944 asserts 44 44 44 0 n/a 00:03:40.944 00:03:40.944 Elapsed time = 0.000 seconds 00:03:40.944 00:03:40.944 real 0m0.057s 00:03:40.944 user 0m0.022s 00:03:40.944 sys 0m0.035s 00:03:40.944 02:28:27 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.944 02:28:27 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:03:40.944 ************************************ 00:03:40.944 END TEST unittest_iscsi 00:03:40.944 ************************************ 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.944 02:28:27 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.944 ************************************ 00:03:40.944 START TEST unittest_json 00:03:40.944 ************************************ 00:03:40.944 02:28:27 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:03:40.944 02:28:27 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:40.944 00:03:40.944 00:03:40.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.944 http://cunit.sourceforge.net/ 00:03:40.944 00:03:40.944 00:03:40.944 Suite: json 00:03:40.944 Test: test_parse_literal ...passed 00:03:40.944 Test: test_parse_string_simple ...passed 00:03:40.944 Test: test_parse_string_control_chars ...passed 00:03:40.944 Test: test_parse_string_utf8 ...passed 00:03:40.944 Test: test_parse_string_escapes_twochar ...passed 00:03:40.944 Test: test_parse_string_escapes_unicode ...passed 00:03:40.944 Test: test_parse_number ...passed 00:03:40.944 Test: test_parse_array ...passed 00:03:40.944 Test: test_parse_object ...passed 00:03:40.944 Test: test_parse_nesting ...passed 00:03:40.944 Test: test_parse_comment ...passed 00:03:40.944 00:03:40.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.944 suites 1 1 n/a 0 0 00:03:40.944 tests 11 11 11 0 0 00:03:40.944 asserts 1516 1516 1516 0 n/a 00:03:40.944 00:03:40.944 Elapsed time = 0.000 seconds 00:03:40.944 02:28:27 unittest.unittest_json -- unit/unittest.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:40.944 00:03:40.944 00:03:40.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.944 http://cunit.sourceforge.net/ 00:03:40.944 00:03:40.944 00:03:40.944 Suite: json 00:03:40.944 Test: test_strequal ...passed 00:03:40.944 Test: test_num_to_uint16 ...passed 00:03:40.944 Test: test_num_to_int32 ...passed 00:03:40.944 Test: test_num_to_uint64 ...passed 00:03:40.944 Test: test_decode_object ...passed 00:03:40.944 Test: test_decode_array ...passed 00:03:40.944 Test: test_decode_bool ...passed 00:03:40.944 Test: test_decode_uint16 ...passed 00:03:40.944 Test: test_decode_int32 ...passed 00:03:40.944 Test: test_decode_uint32 ...passed 00:03:40.944 Test: test_decode_uint64 ...passed 00:03:40.944 Test: test_decode_string ...passed 00:03:40.944 Test: test_decode_uuid ...passed 00:03:40.944 Test: test_find ...passed 00:03:40.944 Test: test_find_array ...passed 00:03:40.944 Test: test_iterating ...passed 00:03:40.944 Test: test_free_object ...passed 00:03:40.944 00:03:40.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.944 suites 1 1 n/a 0 0 00:03:40.944 tests 17 17 17 0 0 00:03:40.944 asserts 236 236 236 0 n/a 00:03:40.944 00:03:40.944 Elapsed time = 0.000 seconds 00:03:40.944 02:28:27 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:40.944 00:03:40.944 00:03:40.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.944 http://cunit.sourceforge.net/ 00:03:40.944 00:03:40.944 00:03:40.944 Suite: json 00:03:40.944 Test: test_write_literal ...passed 00:03:40.944 Test: test_write_string_simple ...passed 00:03:40.944 Test: test_write_string_escapes ...passed 00:03:40.944 Test: test_write_string_utf16le ...passed 00:03:40.944 Test: test_write_number_int32 ...passed 00:03:40.944 Test: test_write_number_uint32 ...passed 00:03:40.944 Test: test_write_number_uint128 ...passed 00:03:40.944 Test: test_write_string_number_uint128 ...passed 00:03:40.944 Test: test_write_number_int64 ...passed 00:03:40.944 Test: test_write_number_uint64 ...passed 00:03:40.944 Test: test_write_number_double ...passed 00:03:40.944 Test: test_write_uuid ...passed 00:03:40.944 Test: test_write_array ...passed 00:03:40.944 Test: test_write_object ...passed 00:03:40.944 Test: test_write_nesting ...passed 00:03:40.944 Test: test_write_val ...passed 00:03:40.944 00:03:40.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.944 suites 1 1 n/a 0 0 00:03:40.944 tests 16 16 16 0 0 00:03:40.944 asserts 918 918 918 0 n/a 00:03:40.944 00:03:40.944 Elapsed time = 0.000 seconds 00:03:40.944 02:28:27 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:40.944 00:03:40.944 00:03:40.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.944 http://cunit.sourceforge.net/ 00:03:40.944 00:03:40.944 00:03:40.944 Suite: jsonrpc 00:03:40.944 Test: test_parse_request ...passed 00:03:40.944 Test: test_parse_request_streaming ...passed 00:03:40.944 00:03:40.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.944 suites 1 1 n/a 0 0 00:03:40.944 tests 2 2 2 0 0 00:03:40.944 asserts 289 289 289 0 n/a 00:03:40.944 00:03:40.944 Elapsed time = 0.008 seconds 00:03:40.944 00:03:40.944 real 0m0.037s 00:03:40.944 user 0m0.035s 00:03:40.944 sys 0m0.008s 00:03:40.944 02:28:27 unittest.unittest_json -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:40.944 02:28:27 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.944 ************************************ 00:03:40.944 END TEST unittest_json 00:03:40.944 ************************************ 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.944 02:28:27 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.944 ************************************ 00:03:40.944 START TEST unittest_rpc 00:03:40.944 ************************************ 00:03:40.944 02:28:27 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:03:40.944 02:28:27 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:40.944 00:03:40.944 00:03:40.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.944 http://cunit.sourceforge.net/ 00:03:40.944 00:03:40.944 00:03:40.944 Suite: rpc 00:03:40.944 Test: test_jsonrpc_handler ...passed 00:03:40.944 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:40.944 Test: test_rpc_get_methods ...[2024-07-25 02:28:27.767338] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:40.944 passed 00:03:40.944 Test: test_rpc_spdk_get_version ...passed 00:03:40.944 Test: test_spdk_rpc_listen_close ...passed 00:03:40.944 Test: test_rpc_run_multiple_servers ...passed 00:03:40.944 00:03:40.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.944 suites 1 1 n/a 0 0 00:03:40.944 tests 6 6 6 0 0 00:03:40.944 asserts 23 23 23 0 n/a 00:03:40.944 00:03:40.944 Elapsed time = 0.000 seconds 00:03:40.944 00:03:40.944 real 0m0.009s 00:03:40.944 user 0m0.001s 00:03:40.944 sys 0m0.009s 00:03:40.944 02:28:27 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.944 02:28:27 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.944 ************************************ 00:03:40.944 END TEST unittest_rpc 00:03:40.944 ************************************ 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:40.944 02:28:27 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.944 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:40.944 ************************************ 00:03:40.944 START TEST unittest_notify 00:03:40.944 ************************************ 00:03:40.944 02:28:27 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:40.944 00:03:40.944 00:03:40.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.944 http://cunit.sourceforge.net/ 00:03:40.944 00:03:40.944 00:03:40.944 Suite: app_suite 00:03:40.945 Test: notify ...passed 00:03:40.945 00:03:40.945 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.945 suites 1 1 n/a 0 0 00:03:40.945 tests 1 1 1 0 0 00:03:40.945 asserts 13 13 13 0 n/a 00:03:40.945 00:03:40.945 Elapsed time = 
0.000 seconds 00:03:40.945 00:03:40.945 real 0m0.008s 00:03:40.945 user 0m0.008s 00:03:40.945 sys 0m0.001s 00:03:40.945 02:28:27 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.945 02:28:27 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:03:40.945 ************************************ 00:03:40.945 END TEST unittest_notify 00:03:40.945 ************************************ 00:03:41.205 02:28:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:41.205 02:28:27 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:03:41.205 02:28:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.205 02:28:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.205 02:28:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:41.205 ************************************ 00:03:41.205 START TEST unittest_nvme 00:03:41.205 ************************************ 00:03:41.205 02:28:27 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:03:41.205 02:28:27 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:41.205 00:03:41.205 00:03:41.205 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.205 http://cunit.sourceforge.net/ 00:03:41.205 00:03:41.205 00:03:41.205 Suite: nvme 00:03:41.205 Test: test_opc_data_transfer ...passed 00:03:41.205 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:41.205 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:41.205 Test: test_trid_parse_and_compare ...[2024-07-25 02:28:27.890193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:41.205 [2024-07-25 02:28:27.890572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:41.205 [2024-07-25 02:28:27.890618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:41.205 [2024-07-25 02:28:27.890640] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:41.205 [2024-07-25 02:28:27.890661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:03:41.205 [2024-07-25 02:28:27.890680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:41.205 passed 00:03:41.205 Test: test_trid_trtype_str ...passed 00:03:41.205 Test: test_trid_adrfam_str ...passed 00:03:41.205 Test: test_nvme_ctrlr_probe ...[2024-07-25 02:28:27.890904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:41.205 passed 00:03:41.205 Test: test_spdk_nvme_probe ...[2024-07-25 02:28:27.890958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:41.205 [2024-07-25 02:28:27.890979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:41.205 [2024-07-25 02:28:27.891002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:41.205 [2024-07-25 02:28:27.891022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 
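[editor's note] The nvme_ut output just above exercises spdk_nvme_transport_id_parse and the probe error paths. For orientation only, a minimal sketch of the public API under test — this is not taken from the log, and the PCIe address below is a placeholder:

    #include <stdio.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        /* Parse a transport ID string the way the trid tests above do.
         * "0000:00:04.0" is a placeholder address, not from this run. */
        struct spdk_nvme_transport_id trid = {0};
        int rc = spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:04.0");
        if (rc != 0) {
            fprintf(stderr, "spdk_nvme_transport_id_parse failed: %d\n", rc);
            return 1;
        }
        printf("trtype=%d traddr=%s\n", trid.trtype, trid.traddr);
        return 0;
    }

Malformed strings (a key without '=' or ':', an over-long key, a key without a value) make the call return a nonzero error, which is what the "Failed to parse transport ID" messages above correspond to.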
00:03:41.205 passed 00:03:41.205 Test: test_spdk_nvme_connect ...[2024-07-25 02:28:27.891072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:41.205 passed 00:03:41.205 Test: test_nvme_ctrlr_probe_internal ...passed 00:03:41.205 Test: test_nvme_init_controllers ...passed 00:03:41.205 Test: test_nvme_driver_init ...[2024-07-25 02:28:27.891287] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:41.205 [2024-07-25 02:28:27.891353] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:41.205 [2024-07-25 02:28:27.891374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:41.205 [2024-07-25 02:28:27.891400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:41.205 [2024-07-25 02:28:27.891432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:41.205 [2024-07-25 02:28:27.891452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:41.205 passed 00:03:41.205 Test: test_spdk_nvme_detach ...passed 00:03:41.205 Test: test_nvme_completion_poll_cb ...passed 00:03:41.205 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:41.205 Test: test_nvme_allocate_request_null ...passed 00:03:41.205 Test: test_nvme_allocate_request ...passed 00:03:41.205 Test: test_nvme_free_request ...passed 00:03:41.205 Test: test_nvme_allocate_request_user_copy ...passed 00:03:41.205 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:41.205 Test: test_nvme_request_check_timeout ...passed 00:03:41.205 Test: test_nvme_wait_for_completion ...passed 00:03:41.205 Test: test_spdk_nvme_parse_func ...passed 00:03:41.205 Test: test_spdk_nvme_detach_async ...passed 00:03:41.205 Test: test_nvme_parse_addr ...[2024-07-25 02:28:28.001819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:41.205 [2024-07-25 02:28:28.002106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1635:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:41.205 passed 00:03:41.205 00:03:41.205 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.205 suites 1 1 n/a 0 0 00:03:41.205 tests 25 25 25 0 0 00:03:41.205 asserts 326 326 326 0 n/a 00:03:41.205 00:03:41.205 Elapsed time = 0.008 seconds 00:03:41.205 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:41.205 00:03:41.205 00:03:41.205 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.205 http://cunit.sourceforge.net/ 00:03:41.205 00:03:41.205 00:03:41.205 Suite: nvme_ctrlr 00:03:41.205 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-25 02:28:28.012983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.205 passed 00:03:41.205 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-25 02:28:28.014626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.205 passed 00:03:41.205 Test: test_nvme_ctrlr_init_en_0_rdy_0 
...[2024-07-25 02:28:28.015834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.205 passed 00:03:41.205 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-25 02:28:28.017029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.205 passed 00:03:41.205 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-25 02:28:28.018239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.205 [2024-07-25 02:28:28.019396] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 02:28:28.020562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 02:28:28.021735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:41.206 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-25 02:28:28.024060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 [2024-07-25 02:28:28.026328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 02:28:28.027498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:41.206 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-25 02:28:28.029812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 [2024-07-25 02:28:28.030956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-25 02:28:28.033235] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:41.206 Test: test_nvme_ctrlr_init_delay ...[2024-07-25 02:28:28.035552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_alloc_io_qpair_rr_1 ...[2024-07-25 02:28:28.036728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 [2024-07-25 02:28:28.036806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:41.206 [2024-07-25 02:28:28.036848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:41.206 [2024-07-25 02:28:28.036886] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:41.206 [2024-07-25 02:28:28.036916] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:41.206 passed 00:03:41.206 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:41.206 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:41.206 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-25 02:28:28.037000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-25 02:28:28.037040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-25 02:28:28.037067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:41.206 [2024-07-25 02:28:28.037111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:41.206 [2024-07-25 02:28:28.037134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:41.206 [2024-07-25 02:28:28.037156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:41.206 [2024-07-25 02:28:28.037177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_fail ...passed 00:03:41.206 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-07-25 02:28:28.037201] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:41.206 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-25 02:28:28.037240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:41.206 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-25 02:28:28.038453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:41.206 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:41.206 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:41.206 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-25 02:28:28.079104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-25 02:28:28.085629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-25 02:28:28.086731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 [2024-07-25 02:28:28.086745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:41.206 passed 00:03:41.206 Test: test_alloc_io_qpair_fail ...passed 00:03:41.206 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:41.206 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:41.206 Test: test_nvme_ctrlr_set_state ...passed 00:03:41.206 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-25 02:28:28.087837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 [2024-07-25 02:28:28.087854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:41.206 [2024-07-25 02:28:28.087868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
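[editor's note] The recurring "Suite / Run Summary / asserts" blocks throughout this log are standard CUnit 2.1-3 console output from the per-component *_ut binaries. As a generic, self-contained sketch (not one of the SPDK test binaries), a harness like the following produces the same kind of summary:

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        /* A trivial assertion; the SPDK suites register many such test functions. */
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        CU_pSuite suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);   /* prints the per-test "...passed" lines */
        CU_basic_run_tests();                /* prints the Run Summary table */
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }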
00:03:41.206 [2024-07-25 02:28:28.087876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-25 02:28:28.091179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-25 02:28:28.098400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_reset ...[2024-07-25 02:28:28.099554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.206 passed 00:03:41.206 Test: test_nvme_ctrlr_aer_callback ...[2024-07-25 02:28:28.099607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.478 passed 00:03:41.479 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-25 02:28:28.100727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:41.479 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:41.479 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-25 02:28:28.101935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:41.479 Test: test_nvme_ctrlr_ana_resize ...[2024-07-25 02:28:28.103091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:41.479 Test: test_nvme_transport_ctrlr_ready ...[2024-07-25 02:28:28.104256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ctrlr_disable ...[2024-07-25 02:28:28.104279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:03:41.479 [2024-07-25 02:28:28.104293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.479 passed 00:03:41.479 00:03:41.479 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.479 suites 1 1 n/a 0 0 00:03:41.479 tests 44 44 44 0 0 00:03:41.479 asserts 10434 10434 10434 0 n/a 00:03:41.479 00:03:41.479 Elapsed time = 0.055 seconds 00:03:41.479 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:41.479 00:03:41.479 00:03:41.479 CUnit - A unit testing framework 
for C - Version 2.1-3 00:03:41.479 http://cunit.sourceforge.net/ 00:03:41.479 00:03:41.479 00:03:41.479 Suite: nvme_ctrlr_cmd 00:03:41.479 Test: test_get_log_pages ...passed 00:03:41.479 Test: test_set_feature_cmd ...passed 00:03:41.479 Test: test_set_feature_ns_cmd ...passed 00:03:41.479 Test: test_get_feature_cmd ...passed 00:03:41.479 Test: test_get_feature_ns_cmd ...passed 00:03:41.479 Test: test_abort_cmd ...passed 00:03:41.479 Test: test_set_host_id_cmds ...[2024-07-25 02:28:28.112835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:41.479 passed 00:03:41.479 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:41.479 Test: test_io_raw_cmd ...passed 00:03:41.479 Test: test_io_raw_cmd_with_md ...passed 00:03:41.479 Test: test_namespace_attach ...passed 00:03:41.479 Test: test_namespace_detach ...passed 00:03:41.479 Test: test_namespace_create ...passed 00:03:41.479 Test: test_namespace_delete ...passed 00:03:41.479 Test: test_doorbell_buffer_config ...passed 00:03:41.479 Test: test_format_nvme ...passed 00:03:41.479 Test: test_fw_commit ...passed 00:03:41.479 Test: test_fw_image_download ...passed 00:03:41.479 Test: test_sanitize ...passed 00:03:41.479 Test: test_directive ...passed 00:03:41.479 Test: test_nvme_request_add_abort ...passed 00:03:41.479 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:41.479 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:41.479 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:41.479 00:03:41.479 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.479 suites 1 1 n/a 0 0 00:03:41.479 tests 24 24 24 0 0 00:03:41.479 asserts 198 198 198 0 n/a 00:03:41.479 00:03:41.479 Elapsed time = 0.000 seconds 00:03:41.479 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:41.479 00:03:41.479 00:03:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.479 http://cunit.sourceforge.net/ 00:03:41.479 00:03:41.479 00:03:41.479 Suite: nvme_ctrlr_cmd 00:03:41.479 Test: test_geometry_cmd ...passed 00:03:41.479 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:41.479 00:03:41.479 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.479 suites 1 1 n/a 0 0 00:03:41.479 tests 2 2 2 0 0 00:03:41.479 asserts 7 7 7 0 n/a 00:03:41.479 00:03:41.479 Elapsed time = 0.000 seconds 00:03:41.479 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:41.479 00:03:41.479 00:03:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.479 http://cunit.sourceforge.net/ 00:03:41.479 00:03:41.479 00:03:41.479 Suite: nvme 00:03:41.479 Test: test_nvme_ns_construct ...passed 00:03:41.479 Test: test_nvme_ns_uuid ...passed 00:03:41.479 Test: test_nvme_ns_csi ...passed 00:03:41.479 Test: test_nvme_ns_data ...passed 00:03:41.479 Test: test_nvme_ns_set_identify_data ...passed 00:03:41.479 Test: test_spdk_nvme_ns_get_values ...passed 00:03:41.479 Test: test_spdk_nvme_ns_is_active ...passed 00:03:41.479 Test: spdk_nvme_ns_supports ...passed 00:03:41.479 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:41.479 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:41.479 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:41.479 Test: test_nvme_ns_find_id_desc ...passed 00:03:41.479 00:03:41.479 Run Summary: Type Total Ran 
Passed Failed Inactive 00:03:41.479 suites 1 1 n/a 0 0 00:03:41.479 tests 12 12 12 0 0 00:03:41.479 asserts 95 95 95 0 n/a 00:03:41.479 00:03:41.479 Elapsed time = 0.000 seconds 00:03:41.479 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:41.479 00:03:41.479 00:03:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.479 http://cunit.sourceforge.net/ 00:03:41.479 00:03:41.479 00:03:41.479 Suite: nvme_ns_cmd 00:03:41.479 Test: split_test ...passed 00:03:41.479 Test: split_test2 ...passed 00:03:41.479 Test: split_test3 ...passed 00:03:41.479 Test: split_test4 ...passed 00:03:41.479 Test: test_nvme_ns_cmd_flush ...passed 00:03:41.479 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:41.479 Test: test_nvme_ns_cmd_copy ...passed 00:03:41.479 Test: test_io_flags ...[2024-07-25 02:28:28.138054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:41.479 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:41.479 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:41.479 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:41.479 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:41.479 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:41.479 Test: test_cmd_child_request ...passed 00:03:41.479 Test: test_nvme_ns_cmd_readv ...passed 00:03:41.479 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:41.479 Test: test_nvme_ns_cmd_writev ...[2024-07-25 02:28:28.138528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:41.479 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:41.479 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:41.479 Test: test_nvme_ns_cmd_comparev ...passed 00:03:41.479 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:41.479 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:41.479 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:41.479 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:41.479 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:41.479 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-25 02:28:28.138745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:41.479 passed 00:03:41.479 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:03:41.479 Test: test_nvme_ns_cmd_verify ...[2024-07-25 02:28:28.138776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:41.479 passed 00:03:41.479 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:41.479 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:41.479 00:03:41.479 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.479 suites 1 1 n/a 0 0 00:03:41.479 tests 32 32 32 0 0 00:03:41.479 asserts 550 550 550 0 n/a 00:03:41.479 00:03:41.479 Elapsed time = 0.000 seconds 00:03:41.479 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:41.479 00:03:41.479 00:03:41.479 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.479 http://cunit.sourceforge.net/ 
00:03:41.479 00:03:41.479 00:03:41.479 Suite: nvme_ns_cmd 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:41.479 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:41.480 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:41.480 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:41.480 00:03:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.480 suites 1 1 n/a 0 0 00:03:41.480 tests 12 12 12 0 0 00:03:41.480 asserts 123 123 123 0 n/a 00:03:41.480 00:03:41.480 Elapsed time = 0.000 seconds 00:03:41.480 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:41.480 00:03:41.480 00:03:41.480 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.480 http://cunit.sourceforge.net/ 00:03:41.480 00:03:41.480 00:03:41.480 Suite: nvme_qpair 00:03:41.480 Test: test3 ...passed 00:03:41.480 Test: test_ctrlr_failed ...passed 00:03:41.480 Test: struct_packing ...passed 00:03:41.480 Test: test_nvme_qpair_process_completions ...[2024-07-25 02:28:28.157906] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:41.480 [2024-07-25 02:28:28.158202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:41.480 [2024-07-25 02:28:28.158299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:41.480 [2024-07-25 02:28:28.158324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:41.480 passed 00:03:41.480 Test: test_nvme_completion_is_retry ...passed 00:03:41.480 Test: test_get_status_string ...passed 00:03:41.480 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:41.480 Test: test_nvme_qpair_submit_request ...passed 00:03:41.480 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:41.480 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:41.480 Test: test_nvme_qpair_init_deinit ...[2024-07-25 02:28:28.158405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:41.480 passed 00:03:41.480 Test: test_nvme_get_sgl_print_info ...passed 00:03:41.480 00:03:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.480 suites 1 1 n/a 0 0 00:03:41.480 tests 12 12 12 0 0 00:03:41.480 asserts 154 154 154 0 n/a 00:03:41.480 00:03:41.480 Elapsed time = 0.000 seconds 00:03:41.480 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:41.480 00:03:41.480 00:03:41.480 CUnit - A unit testing framework for 
C - Version 2.1-3 00:03:41.480 http://cunit.sourceforge.net/ 00:03:41.480 00:03:41.480 00:03:41.480 Suite: nvme_pcie 00:03:41.480 Test: test_prp_list_append ...[2024-07-25 02:28:28.167435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:41.480 [2024-07-25 02:28:28.167804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:41.480 [2024-07-25 02:28:28.167849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1225:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:41.480 [2024-07-25 02:28:28.167941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:41.480 passed 00:03:41.480 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-25 02:28:28.167986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1219:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:41.480 passed 00:03:41.480 Test: test_shadow_doorbell_update ...passed 00:03:41.480 Test: test_build_contig_hw_sgl_request ...passed 00:03:41.480 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:41.480 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:41.480 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:41.480 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:41.480 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed[2024-07-25 02:28:28.168153] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1206:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:41.480 00:03:41.480 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:41.480 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-25 02:28:28.168220] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:03:41.480 passed 00:03:41.480 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:03:41.480 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-25 02:28:28.168258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:41.480 passed 00:03:41.480 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-25 02:28:28.168288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:41.480 [2024-07-25 02:28:28.168311] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:41.480 passed 00:03:41.480 00:03:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.480 suites 1 1 n/a 0 0 00:03:41.480 tests 14 14 14 0 0 00:03:41.480 asserts 235 235 235 0 n/a 00:03:41.480 00:03:41.480 Elapsed time = 0.000 seconds 00:03:41.480 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:41.480 00:03:41.480 00:03:41.480 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.480 http://cunit.sourceforge.net/ 00:03:41.480 00:03:41.480 00:03:41.480 Suite: nvme_ns_cmd 00:03:41.480 Test: nvme_poll_group_create_test ...passed 00:03:41.480 Test: nvme_poll_group_add_remove_test ...passed 00:03:41.480 Test: nvme_poll_group_process_completions ...passed 00:03:41.480 Test: nvme_poll_group_destroy_test ...passed 00:03:41.480 Test: nvme_poll_group_get_free_stats ...passed 00:03:41.480 00:03:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.480 suites 1 1 n/a 0 0 00:03:41.480 tests 5 5 5 0 0 00:03:41.480 asserts 75 75 75 0 n/a 00:03:41.480 00:03:41.480 Elapsed time = 0.000 seconds 00:03:41.480 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:41.480 00:03:41.480 00:03:41.480 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.480 http://cunit.sourceforge.net/ 00:03:41.480 00:03:41.480 00:03:41.480 Suite: nvme_quirks 00:03:41.480 Test: test_nvme_quirks_striping ...passed 00:03:41.480 00:03:41.480 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.480 suites 1 1 n/a 0 0 00:03:41.480 tests 1 1 1 0 0 00:03:41.480 asserts 5 5 5 0 n/a 00:03:41.480 00:03:41.480 Elapsed time = 0.000 seconds 00:03:41.480 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:41.480 00:03:41.480 00:03:41.480 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.480 http://cunit.sourceforge.net/ 00:03:41.480 00:03:41.480 00:03:41.480 Suite: nvme_tcp 00:03:41.480 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:41.480 Test: test_nvme_tcp_build_iovs ...passed 00:03:41.480 Test: test_nvme_tcp_build_sgl_request ...[2024-07-25 02:28:28.187253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820a695c8, and the iovcnt=16, remaining_size=28672 00:03:41.480 passed 00:03:41.480 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:41.480 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:41.480 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:41.480 Test: test_nvme_tcp_req_get ...passed 00:03:41.480 Test: test_nvme_tcp_req_init ...passed 00:03:41.480 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:41.480 Test: 
test_nvme_tcp_qpair_write_pdu ...passed 00:03:41.480 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:03:41.480 Test: test_nvme_tcp_alloc_reqs ...[2024-07-25 02:28:28.187702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(6) to be set 00:03:41.480 passed 00:03:41.480 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-25 02:28:28.187773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.480 passed 00:03:41.480 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-25 02:28:28.187802] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820a6a908 00:03:41.480 [2024-07-25 02:28:28.187821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:41.480 [2024-07-25 02:28:28.187838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.480 [2024-07-25 02:28:28.187855] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:41.480 [2024-07-25 02:28:28.187871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.480 [2024-07-25 02:28:28.187888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:41.480 [2024-07-25 02:28:28.187904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.480 [2024-07-25 02:28:28.187925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.187942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.187958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.187974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.187990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-25 02:28:28.188042] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:41.481 [2024-07-25 02:28:28.188060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:03:41.481 Test: 
test_nvme_tcp_c2h_payload_handle ...passed 00:03:41.481 Test: test_nvme_tcp_icresp_handle ...[2024-07-25 02:28:28.269736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:41.481 [2024-07-25 02:28:28.269883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820a6ad40): PDU Sequence Error 00:03:41.481 [2024-07-25 02:28:28.269918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:03:41.481 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-25 02:28:28.269943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:41.481 [2024-07-25 02:28:28.269965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.269987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:41.481 [2024-07-25 02:28:28.270007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.270028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a6b178 is same with the state(0) to be set 00:03:41.481 [2024-07-25 02:28:28.270077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820a6ad40): PDU Sequence Error 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-07-25 02:28:28.270118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820a6b178 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-25 02:28:28.270189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820a68ed8, errno=0, rc=0 00:03:41.481 [2024-07-25 02:28:28.270213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a68ed8 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.270233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820a68ed8 is same with the state(5) to be set 00:03:41.481 [2024-07-25 02:28:28.270323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820a68ed8 (0): No error: 0 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-25 02:28:28.270346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820a68ed8 (0): No error: 0 00:03:41.481 passed 00:03:41.481 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:41.481 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:03:41.481 Test: test_nvme_tcp_ctrlr_construct ...passed 00:03:41.481 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:41.481 00:03:41.481 Run Summary: Type Total Ran 
Passed Failed Inactive 00:03:41.481 suites 1 1 n/a 0 0 00:03:41.481 tests 27 27 27 0 0 00:03:41.481 asserts 624 624 624 0 n/a 00:03:41.481 00:03:41.481 Elapsed time = 0.062 seconds 00:03:41.481 [2024-07-25 02:28:28.331683] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:41.481 [2024-07-25 02:28:28.331735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:41.481 [2024-07-25 02:28:28.331782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.481 [2024-07-25 02:28:28.331788] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.481 [2024-07-25 02:28:28.331818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:41.481 [2024-07-25 02:28:28.331824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:41.481 [2024-07-25 02:28:28.331831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:41.481 [2024-07-25 02:28:28.331837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:41.481 [2024-07-25 02:28:28.331846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9356266b000 with addr=192.168.1.78, port=23 00:03:41.481 [2024-07-25 02:28:28.331851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:41.481 [2024-07-25 02:28:28.331862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x93562639180, and the iovcnt=1, remaining_size=1024 00:03:41.481 [2024-07-25 02:28:28.331867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:41.481 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:41.481 00:03:41.481 00:03:41.481 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.481 http://cunit.sourceforge.net/ 00:03:41.481 00:03:41.481 00:03:41.481 Suite: nvme_transport 00:03:41.481 Test: test_nvme_get_transport ...passed 00:03:41.481 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:41.481 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:41.481 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:41.481 Test: test_ctrlr_get_memory_domains ...passed 00:03:41.481 00:03:41.481 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.481 suites 1 1 n/a 0 0 00:03:41.481 tests 5 5 5 0 0 00:03:41.481 asserts 28 28 28 0 n/a 00:03:41.481 00:03:41.481 Elapsed time = 0.000 seconds 00:03:41.481 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:41.481 00:03:41.481 00:03:41.481 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.481 http://cunit.sourceforge.net/ 00:03:41.481 
00:03:41.481 00:03:41.481 Suite: nvme_io_msg 00:03:41.481 Test: test_nvme_io_msg_send ...passed 00:03:41.481 Test: test_nvme_io_msg_process ...passed 00:03:41.481 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:41.481 00:03:41.481 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.481 suites 1 1 n/a 0 0 00:03:41.481 tests 3 3 3 0 0 00:03:41.481 asserts 56 56 56 0 n/a 00:03:41.481 00:03:41.481 Elapsed time = 0.000 seconds 00:03:41.481 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:41.481 00:03:41.481 00:03:41.481 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.481 http://cunit.sourceforge.net/ 00:03:41.481 00:03:41.481 00:03:41.481 Suite: nvme_pcie_common 00:03:41.481 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:03:41.481 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:03:41.481 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:41.481 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-25 02:28:28.355379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:41.481 [2024-07-25 02:28:28.355529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 505:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:41.481 [2024-07-25 02:28:28.355537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 458:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:41.481 [2024-07-25 02:28:28.355543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 552:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:41.481 passed 00:03:41.481 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:03:41.481 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:03:41.481 00:03:41.481 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.481 suites 1 1 n/a 0 0 00:03:41.481 tests 6 6 6 0 0 00:03:41.481 asserts 148 148 148 0 n/a 00:03:41.481 00:03:41.482 Elapsed time = 0.000 seconds 00:03:41.482 [2024-07-25 02:28:28.355603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.482 [2024-07-25 02:28:28.355609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1804:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.482 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:41.482 00:03:41.482 00:03:41.482 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.482 http://cunit.sourceforge.net/ 00:03:41.482 00:03:41.482 00:03:41.482 Suite: nvme_fabric 00:03:41.482 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:41.482 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:41.482 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:41.482 Test: test_nvme_fabric_discover_probe ...passed 00:03:41.482 Test: test_nvme_fabric_qpair_connect ...[2024-07-25 02:28:28.360051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:41.482 passed 00:03:41.482 00:03:41.482 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.482 suites 1 1 n/a 0 0 00:03:41.482 
tests 5 5 5 0 0 00:03:41.482 asserts 60 60 60 0 n/a 00:03:41.482 00:03:41.482 Elapsed time = 0.000 seconds 00:03:41.740 02:28:28 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:41.740 00:03:41.740 00:03:41.740 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.740 http://cunit.sourceforge.net/ 00:03:41.740 00:03:41.740 00:03:41.740 Suite: nvme_opal 00:03:41.740 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:41.740 Test: test_opal_add_short_atom_header ...passed 00:03:41.740 00:03:41.740 [2024-07-25 02:28:28.364968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:41.740 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.740 suites 1 1 n/a 0 0 00:03:41.740 tests 2 2 2 0 0 00:03:41.740 asserts 22 22 22 0 n/a 00:03:41.740 00:03:41.741 Elapsed time = 0.000 seconds 00:03:41.741 00:03:41.741 real 0m0.483s 00:03:41.741 user 0m0.146s 00:03:41.741 sys 0m0.109s 00:03:41.741 02:28:28 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.741 02:28:28 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:03:41.741 ************************************ 00:03:41.741 END TEST unittest_nvme 00:03:41.741 ************************************ 00:03:41.741 02:28:28 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:41.741 02:28:28 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:41.741 02:28:28 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.741 02:28:28 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.741 02:28:28 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:41.741 ************************************ 00:03:41.741 START TEST unittest_log 00:03:41.741 ************************************ 00:03:41.741 02:28:28 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:41.741 00:03:41.741 00:03:41.741 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.741 http://cunit.sourceforge.net/ 00:03:41.741 00:03:41.741 00:03:41.741 Suite: log 00:03:41.741 Test: log_test ...[2024-07-25 02:28:28.422288] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:03:41.741 [2024-07-25 02:28:28.422650] log_ut.c: 57:log_test: *DEBUG*: log test 00:03:41.741 log dump test: 00:03:41.741 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:41.741 spdk dump test: 00:03:41.741 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:41.741 spdk dump test: 00:03:41.741 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:41.741 passed 00:03:41.741 Test: deprecation ...00000010 65 20 63 68 61 72 73 e chars 00:03:42.680 passed 00:03:42.680 00:03:42.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.680 suites 1 1 n/a 0 0 00:03:42.680 tests 2 2 2 0 0 00:03:42.680 asserts 73 73 73 0 n/a 00:03:42.680 00:03:42.680 Elapsed time = 0.000 seconds 00:03:42.680 00:03:42.680 real 0m1.072s 00:03:42.680 user 0m0.008s 00:03:42.680 sys 0m0.001s 00:03:42.680 02:28:29 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.680 02:28:29 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:03:42.680 ************************************ 00:03:42.680 END TEST unittest_log 00:03:42.680 
************************************ 00:03:42.680 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:42.680 02:28:29 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:42.680 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.680 02:28:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.680 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:42.680 ************************************ 00:03:42.680 START TEST unittest_lvol 00:03:42.680 ************************************ 00:03:42.680 02:28:29 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:42.680 00:03:42.680 00:03:42.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.680 http://cunit.sourceforge.net/ 00:03:42.680 00:03:42.680 00:03:42.680 Suite: lvol 00:03:42.680 Test: lvs_init_unload_success ...[2024-07-25 02:28:29.548554] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:42.680 passed 00:03:42.680 Test: lvs_init_destroy_success ...passed 00:03:42.680 Test: lvs_init_opts_success ...[2024-07-25 02:28:29.548958] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:42.680 passed 00:03:42.680 Test: lvs_unload_lvs_is_null_fail ...[2024-07-25 02:28:29.549012] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:42.680 passed 00:03:42.680 Test: lvs_names ...[2024-07-25 02:28:29.549041] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:42.680 [2024-07-25 02:28:29.549063] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:03:42.680 [2024-07-25 02:28:29.549107] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:42.680 passed 00:03:42.680 Test: lvol_create_destroy_success ...passed 00:03:42.680 Test: lvol_create_fail ...[2024-07-25 02:28:29.549230] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:42.680 [2024-07-25 02:28:29.549259] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:42.680 passed 00:03:42.680 Test: lvol_destroy_fail ...[2024-07-25 02:28:29.549324] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:42.680 passed 00:03:42.680 Test: lvol_close ...[2024-07-25 02:28:29.549372] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:42.680 [2024-07-25 02:28:29.549395] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:42.680 passed 00:03:42.680 Test: lvol_resize ...passed 00:03:42.680 Test: lvol_set_read_only ...passed 00:03:42.680 Test: test_lvs_load ...[2024-07-25 02:28:29.549509] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:42.680 [2024-07-25 02:28:29.549537] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:42.680 passed 00:03:42.680 Test: lvols_load ...[2024-07-25 02:28:29.549583] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:42.680 [2024-07-25 02:28:29.549647] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:42.680 passed 00:03:42.680 Test: lvol_open ...passed 00:03:42.680 Test: lvol_snapshot ...passed 00:03:42.680 Test: lvol_snapshot_fail ...[2024-07-25 02:28:29.549830] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:42.680 passed 00:03:42.680 Test: lvol_clone ...passed 00:03:42.680 Test: lvol_clone_fail ...[2024-07-25 02:28:29.549937] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:42.680 passed 00:03:42.680 Test: lvol_iter_clones ...passed 00:03:42.680 Test: lvol_refcnt ...[2024-07-25 02:28:29.550031] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 946671cb-4a2d-11ef-9c8e-7947904e2597 because it is still open 00:03:42.680 passed 00:03:42.680 Test: lvol_names ...[2024-07-25 02:28:29.550069] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:03:42.680 [2024-07-25 02:28:29.550096] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:42.680 [2024-07-25 02:28:29.550135] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:42.680 passed 00:03:42.680 Test: lvol_create_thin_provisioned ...passed 00:03:42.680 Test: lvol_rename ...[2024-07-25 02:28:29.550218] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:42.680 [2024-07-25 02:28:29.550251] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:42.680 passed 00:03:42.680 Test: lvs_rename ...[2024-07-25 02:28:29.550308] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:42.680 passed 00:03:42.680 Test: lvol_inflate ...[2024-07-25 02:28:29.550357] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:42.680 passed 00:03:42.680 Test: lvol_decouple_parent ...[2024-07-25 02:28:29.550397] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:42.680 passed 00:03:42.680 Test: lvol_get_xattr ...passed 00:03:42.680 Test: lvol_esnap_reload ...passed 00:03:42.680 Test: lvol_esnap_create_bad_args ...[2024-07-25 02:28:29.550484] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:42.680 [2024-07-25 02:28:29.550506] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:42.680 [2024-07-25 02:28:29.550528] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:42.681 [2024-07-25 02:28:29.550564] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:42.681 [2024-07-25 02:28:29.550598] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:42.681 passed 00:03:42.681 Test: lvol_esnap_create_delete ...passed 00:03:42.681 Test: lvol_esnap_load_esnaps ...[2024-07-25 02:28:29.550665] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:42.681 passed 00:03:42.681 Test: lvol_esnap_missing ...[2024-07-25 02:28:29.550719] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:42.681 [2024-07-25 02:28:29.550746] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:42.681 passed 00:03:42.681 Test: lvol_esnap_hotplug ... 
00:03:42.681 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:42.681 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:42.681 [2024-07-25 02:28:29.550885] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 94669312-4a2d-11ef-9c8e-7947904e2597: failed to create esnap bs_dev: error -12 00:03:42.681 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:42.681 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:42.681 [2024-07-25 02:28:29.551003] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 94669765-4a2d-11ef-9c8e-7947904e2597: failed to create esnap bs_dev: error -12 00:03:42.681 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:42.681 [2024-07-25 02:28:29.551064] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 94669a18-4a2d-11ef-9c8e-7947904e2597: failed to create esnap bs_dev: error -12 00:03:42.681 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:42.681 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:42.681 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:42.681 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:42.681 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:42.681 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:42.681 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:42.681 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:42.681 passed 00:03:42.681 Test: lvol_get_by ...passed 00:03:42.681 Test: lvol_shallow_copy ...[2024-07-25 02:28:29.551485] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:42.681 [2024-07-25 02:28:29.551516] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 9466aa95-4a2d-11ef-9c8e-7947904e2597 shallow copy, ext_dev must not be NULL 00:03:42.681 passed 00:03:42.681 Test: lvol_set_parent ...[2024-07-25 02:28:29.551570] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:03:42.681 [2024-07-25 02:28:29.551597] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:03:42.681 passed 00:03:42.681 Test: lvol_set_external_parent ...[2024-07-25 02:28:29.551638] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:03:42.681 [2024-07-25 02:28:29.551665] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:03:42.681 [2024-07-25 02:28:29.551686] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:03:42.681 passed 00:03:42.681 00:03:42.681 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.681 suites 1 1 n/a 0 0 00:03:42.681 tests 37 37 37 0 0 00:03:42.681 asserts 1505 1505 1505 0 n/a 00:03:42.681 00:03:42.681 Elapsed time = 0.008 seconds 00:03:42.681 00:03:42.681 real 0m0.015s 00:03:42.681 user 0m0.010s 00:03:42.681 sys 0m0.008s 
00:03:42.681 02:28:29 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.681 02:28:29 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:03:42.681 ************************************ 00:03:42.681 END TEST unittest_lvol 00:03:42.681 ************************************ 00:03:42.942 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:42.942 02:28:29 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.942 02:28:29 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:42.942 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.942 02:28:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.942 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:42.942 ************************************ 00:03:42.942 START TEST unittest_nvme_rdma 00:03:42.942 ************************************ 00:03:42.942 02:28:29 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:42.942 00:03:42.942 00:03:42.942 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.942 http://cunit.sourceforge.net/ 00:03:42.942 00:03:42.942 00:03:42.942 Suite: nvme_rdma 00:03:42.942 Test: test_nvme_rdma_build_sgl_request ...[2024-07-25 02:28:29.613944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:42.942 [2024-07-25 02:28:29.614097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:42.942 [2024-07-25 02:28:29.614111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:42.942 passed 00:03:42.942 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:42.942 Test: test_nvme_rdma_build_contig_request ...passed 00:03:42.942 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:42.942 Test: test_nvme_rdma_create_reqs ...[2024-07-25 02:28:29.614123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:42.942 [2024-07-25 02:28:29.614140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:42.942 passed 00:03:42.942 Test: test_nvme_rdma_create_rsps ...passed 00:03:42.942 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:03:42.942 Test: test_nvme_rdma_poller_create ...[2024-07-25 02:28:29.614160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:42.942 [2024-07-25 02:28:29.614176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:42.942 [2024-07-25 02:28:29.614186] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:03:42.942 passed 00:03:42.942 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:03:42.942 Test: test_nvme_rdma_ctrlr_construct ...passed 00:03:42.942 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:42.942 Test: test_nvme_rdma_req_init ...passed 00:03:42.942 Test: test_nvme_rdma_validate_cm_event ...[2024-07-25 02:28:29.614202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:42.942 [2024-07-25 02:28:29.614241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:42.942 [2024-07-25 02:28:29.614248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:42.942 passed 00:03:42.942 Test: test_nvme_rdma_qpair_init ...passed 00:03:42.942 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:42.942 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:42.942 Test: test_rdma_get_memory_translation ...passed 00:03:42.942 Test: test_get_rdma_qpair_from_wc ...passed 00:03:42.942 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:42.942 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:03:42.942 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-25 02:28:29.614260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:42.942 [2024-07-25 02:28:29.614298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:42.942 [2024-07-25 02:28:29.614313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:42.942 [2024-07-25 02:28:29.614319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:42.942 [2024-07-25 02:28:29.614332] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:42.942 [2024-07-25 02:28:29.614338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:42.942 [2024-07-25 02:28:29.614345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8207f4448 on poll group 0x3486ecc72000 00:03:42.943 [2024-07-25 02:28:29.614350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:03:42.943 [2024-07-25 02:28:29.614355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:42.943 [2024-07-25 02:28:29.614361] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8207f4448 on poll group 0x3486ecc72000 00:03:42.943 passed 00:03:42.943 00:03:42.943 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.943 suites 1 1 n/a 0 0 00:03:42.943 tests 21 21 21 0 0 00:03:42.943 asserts 397 397 397 0 n/a 00:03:42.943 00:03:42.943 Elapsed time = 0.000 seconds[2024-07-25 02:28:29.614394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:42.943 00:03:42.943 00:03:42.943 real 0m0.005s 00:03:42.943 user 0m0.000s 00:03:42.943 sys 0m0.004s 00:03:42.943 02:28:29 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.943 02:28:29 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 ************************************ 00:03:42.943 END TEST unittest_nvme_rdma 00:03:42.943 ************************************ 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:42.943 02:28:29 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 ************************************ 00:03:42.943 START TEST unittest_nvmf_transport 00:03:42.943 ************************************ 00:03:42.943 02:28:29 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:42.943 00:03:42.943 00:03:42.943 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.943 http://cunit.sourceforge.net/ 00:03:42.943 00:03:42.943 00:03:42.943 Suite: nvmf 00:03:42.943 Test: test_spdk_nvmf_transport_create ...[2024-07-25 02:28:29.662422] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:03:42.943 [2024-07-25 02:28:29.662833] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:42.943 [2024-07-25 02:28:29.662880] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:42.943 [2024-07-25 02:28:29.662929] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:42.943 passed 00:03:42.943 Test: test_nvmf_transport_poll_group_create ...passed 00:03:42.943 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-25 02:28:29.663017] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:03:42.943 [2024-07-25 02:28:29.663039] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:42.943 [2024-07-25 02:28:29.663061] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:42.943 passed 00:03:42.943 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:42.943 00:03:42.943 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.943 suites 1 1 n/a 0 0 00:03:42.943 tests 4 4 4 0 0 00:03:42.943 asserts 49 49 49 0 n/a 00:03:42.943 00:03:42.943 Elapsed time = 0.000 seconds 00:03:42.943 00:03:42.943 real 0m0.009s 00:03:42.943 user 0m0.007s 00:03:42.943 sys 0m0.006s 00:03:42.943 02:28:29 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.943 02:28:29 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 ************************************ 00:03:42.943 END TEST unittest_nvmf_transport 00:03:42.943 ************************************ 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:42.943 02:28:29 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 ************************************ 00:03:42.943 START TEST unittest_rdma 00:03:42.943 ************************************ 00:03:42.943 02:28:29 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:42.943 00:03:42.943 00:03:42.943 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.943 http://cunit.sourceforge.net/ 00:03:42.943 00:03:42.943 00:03:42.943 Suite: rdma_common 00:03:42.943 Test: test_spdk_rdma_pd ...[2024-07-25 02:28:29.720745] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:42.943 [2024-07-25 02:28:29.721165] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:42.943 passed 00:03:42.943 00:03:42.943 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.943 suites 1 1 n/a 0 0 00:03:42.943 tests 1 1 1 0 0 00:03:42.943 asserts 31 31 31 0 n/a 00:03:42.943 00:03:42.943 Elapsed time = 0.000 seconds 00:03:42.943 00:03:42.943 real 0m0.008s 00:03:42.943 user 0m0.008s 00:03:42.943 sys 0m0.001s 00:03:42.943 02:28:29 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.943 02:28:29 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 ************************************ 00:03:42.943 END TEST unittest_rdma 00:03:42.943 ************************************ 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:42.943 02:28:29 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.943 02:28:29 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:03:42.943 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:42.943 ************************************ 00:03:42.943 START TEST unittest_nvmf 00:03:42.943 ************************************ 00:03:42.943 02:28:29 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:03:42.943 02:28:29 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:42.943 00:03:42.943 00:03:42.943 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.943 http://cunit.sourceforge.net/ 00:03:42.943 00:03:42.943 00:03:42.943 Suite: nvmf 00:03:42.943 Test: test_get_log_page ...[2024-07-25 02:28:29.787490] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:42.943 passed 00:03:42.943 Test: test_process_fabrics_cmd ...passed 00:03:42.943 Test: test_connect ...[2024-07-25 02:28:29.787882] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4742:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:03:42.943 [2024-07-25 02:28:29.788021] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:42.943 [2024-07-25 02:28:29.788050] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:42.943 [2024-07-25 02:28:29.788076] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:42.943 [2024-07-25 02:28:29.788099] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:42.943 [2024-07-25 02:28:29.788121] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:42.943 [2024-07-25 02:28:29.788143] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:42.943 [2024-07-25 02:28:29.788165] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:42.943 [2024-07-25 02:28:29.788187] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:03:42.943 [2024-07-25 02:28:29.788217] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:42.943 [2024-07-25 02:28:29.788244] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:42.943 [2024-07-25 02:28:29.788286] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:42.943 [2024-07-25 02:28:29.788312] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:42.943 [2024-07-25 02:28:29.788337] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:42.943 [2024-07-25 02:28:29.788362] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:42.943 [2024-07-25 02:28:29.788396] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:03:42.943 [2024-07-25 02:28:29.788431] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:03:42.943 passed 00:03:42.943 Test: test_get_ns_id_desc_list ...[2024-07-25 02:28:29.788456] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:03:42.943 passed 00:03:42.943 Test: test_identify_ns ...[2024-07-25 02:28:29.788542] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:42.943 [2024-07-25 02:28:29.788661] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:42.943 [2024-07-25 02:28:29.788729] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:42.944 passed 00:03:42.944 Test: test_identify_ns_iocs_specific ...[2024-07-25 02:28:29.788809] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:42.944 [2024-07-25 02:28:29.788926] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:42.944 passed 00:03:42.944 Test: test_reservation_write_exclusive ...passed 00:03:42.944 Test: test_reservation_exclusive_access ...passed 00:03:42.944 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:42.944 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:42.944 Test: test_reservation_notification_log_page ...passed 00:03:42.944 Test: test_get_dif_ctx ...passed 00:03:42.944 Test: test_set_get_features ...[2024-07-25 02:28:29.789110] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:42.944 [2024-07-25 02:28:29.789140] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:42.944 passed 00:03:42.944 Test: test_identify_ctrlr ...[2024-07-25 02:28:29.789161] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:42.944 [2024-07-25 02:28:29.789181] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:42.944 passed 00:03:42.944 Test: test_identify_ctrlr_iocs_specific ...passed 00:03:42.944 Test: test_custom_admin_cmd ...passed 00:03:42.944 Test: test_fused_compare_and_write ...[2024-07-25 02:28:29.789377] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4249:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:42.944 [2024-07-25 02:28:29.789411] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:42.944 [2024-07-25 02:28:29.789430] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4256:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:42.944 passed 00:03:42.944 Test: test_multi_async_event_reqs ...passed 00:03:42.944 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:42.944 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:03:42.944 Test: test_multi_async_events ...passed 00:03:42.944 Test: test_rae ...passed 00:03:42.944 Test: test_nvmf_ctrlr_create_destruct ...passed 00:03:42.944 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:42.944 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:03:42.944 Test: test_zcopy_read ...passed 00:03:42.944 Test: test_zcopy_write ...passed 00:03:42.944 Test: test_nvmf_property_set ...passed 00:03:42.944 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-25 02:28:29.789547] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4742:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:03:42.944 [2024-07-25 02:28:29.789571] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:03:42.944 [2024-07-25 02:28:29.789619] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:42.944 [2024-07-25 02:28:29.789638] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:42.944 passed 00:03:42.944 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-25 02:28:29.789664] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1970:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:42.944 [2024-07-25 02:28:29.789684] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1976:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:42.944 [2024-07-25 02:28:29.789702] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:42.944 [2024-07-25 02:28:29.789719] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1988:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:42.944 passed 00:03:42.944 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:03:42.944 Test: test_nvmf_check_qpair_active ...[2024-07-25 02:28:29.789760] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4742:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:03:42.944 [2024-07-25 02:28:29.789779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:03:42.944 
[2024-07-25 02:28:29.789798] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:03:42.944 [2024-07-25 02:28:29.789816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:03:42.944 passed 00:03:42.944 00:03:42.944 [2024-07-25 02:28:29.789833] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4768:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:03:42.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.944 suites 1 1 n/a 0 0 00:03:42.944 tests 32 32 32 0 0 00:03:42.944 asserts 983 983 983 0 n/a 00:03:42.944 00:03:42.944 Elapsed time = 0.000 seconds 00:03:42.944 02:28:29 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:42.944 00:03:42.944 00:03:42.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.944 http://cunit.sourceforge.net/ 00:03:42.944 00:03:42.944 00:03:42.944 Suite: nvmf 00:03:42.944 Test: test_get_rw_params ...passed 00:03:42.944 Test: test_get_rw_ext_params ...passed 00:03:42.944 Test: test_lba_in_range ...passed 00:03:42.944 Test: test_get_dif_ctx ...passed 00:03:42.944 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:42.944 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-25 02:28:29.800201] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:42.944 [2024-07-25 02:28:29.800533] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:42.944 [2024-07-25 02:28:29.800583] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:42.944 passed 00:03:42.944 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:03:42.944 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-25 02:28:29.800613] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:42.944 [2024-07-25 02:28:29.800636] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:42.944 [2024-07-25 02:28:29.800662] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:42.944 [2024-07-25 02:28:29.800683] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:42.944 [2024-07-25 02:28:29.800716] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:42.944 [2024-07-25 02:28:29.800735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:42.944 passed 00:03:42.944 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:42.944 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:03:42.944 00:03:42.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.944 suites 1 1 n/a 0 0 00:03:42.944 tests 10 10 10 0 0 00:03:42.944 asserts 159 159 159 0 n/a 00:03:42.944 00:03:42.944 Elapsed time = 0.000 seconds 00:03:42.944 02:28:29 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:42.944 00:03:42.944 00:03:42.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.944 http://cunit.sourceforge.net/ 00:03:42.944 00:03:42.944 00:03:42.944 Suite: nvmf 00:03:42.944 Test: test_discovery_log ...passed 00:03:42.944 Test: test_discovery_log_with_filters ...passed 00:03:42.944 00:03:42.944 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.944 suites 1 1 n/a 0 0 00:03:42.944 tests 2 2 2 0 0 00:03:42.944 asserts 238 238 238 0 n/a 00:03:42.944 00:03:42.944 Elapsed time = 0.000 seconds 00:03:42.944 02:28:29 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:42.944 00:03:42.944 00:03:42.944 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.944 http://cunit.sourceforge.net/ 00:03:42.944 00:03:42.944 00:03:42.944 Suite: nvmf 00:03:42.944 Test: nvmf_test_create_subsystem ...[2024-07-25 02:28:29.818679] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:42.944 [2024-07-25 02:28:29.819063] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:03:42.944 [2024-07-25 02:28:29.819121] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:03:42.944 [2024-07-25 02:28:29.819146] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:03:42.944 [2024-07-25 02:28:29.819169] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:42.944 [2024-07-25 02:28:29.819188] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:03:42.945 [2024-07-25 02:28:29.819228] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:42.945 [2024-07-25 02:28:29.819247] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:03:42.945 [2024-07-25 02:28:29.819269] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:42.945 [2024-07-25 02:28:29.819289] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:03:42.945 [2024-07-25 02:28:29.819309] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:03:42.945 [2024-07-25 02:28:29.819330] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:03:42.945 [2024-07-25 02:28:29.819364] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:42.945 [2024-07-25 02:28:29.819386] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:03:42.945 [2024-07-25 02:28:29.819454] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:42.945 [2024-07-25 02:28:29.819476] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:03:42.945 [2024-07-25 02:28:29.819522] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:42.945 [2024-07-25 02:28:29.819559] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:03:42.945 [2024-07-25 02:28:29.819581] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:42.945 [2024-07-25 02:28:29.819601] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:42.945 [2024-07-25 02:28:29.819637] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:42.945 [2024-07-25 02:28:29.819657] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:42.945 passed 00:03:42.945 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-25 02:28:29.819778] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:42.945 passed 00:03:42.945 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-25 02:28:29.819839] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2031:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:42.945 [2024-07-25 02:28:29.819887] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2162:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:03:42.945 passed 00:03:42.945 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:42.945 Test: test_spdk_nvmf_ns_visible ...[2024-07-25 02:28:29.819941] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:03:42.945 passed 00:03:42.945 Test: test_reservation_register ...[2024-07-25 02:28:29.820083] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 [2024-07-25 02:28:29.820113] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3164:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:42.945 passed 00:03:42.945 Test: test_reservation_register_with_ptpl ...passed 00:03:42.945 Test: test_reservation_acquire_preempt_1 ...[2024-07-25 02:28:29.820477] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 passed 00:03:42.945 Test: test_reservation_acquire_release_with_ptpl ...passed 00:03:42.945 Test: test_reservation_release ...[2024-07-25 02:28:29.820802] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 passed 00:03:42.945 Test: test_reservation_unregister_notification ...passed 00:03:42.945 Test: test_reservation_release_notification ...[2024-07-25 02:28:29.820842] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 [2024-07-25 02:28:29.820873] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 passed 00:03:42.945 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:42.945 Test: test_reservation_clear_notification ...[2024-07-25 02:28:29.820923] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 [2024-07-25 02:28:29.820962] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 passed 00:03:42.945 Test: test_reservation_preempt_notification ...passed 00:03:42.945 Test: test_spdk_nvmf_ns_event ...[2024-07-25 02:28:29.821006] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3108:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:42.945 passed 00:03:42.945 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:42.945 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:42.945 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-25 02:28:29.821214] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:42.945 passed 00:03:42.945 Test: test_nvmf_ns_reservation_report ...[2024-07-25 02:28:29.821254] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:03:42.945 [2024-07-25 02:28:29.821310] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3470:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:42.945 passed 00:03:42.945 Test: test_nvmf_nqn_is_valid ...[2024-07-25 
02:28:29.821371] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:42.945 [2024-07-25 02:28:29.821392] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:948fd8aa-4a2d-11ef-9c8e-7947904e259": uuid is not the correct length 00:03:42.945 passed 00:03:42.945 Test: test_nvmf_ns_reservation_restore ...[2024-07-25 02:28:29.821428] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:03:42.945 [2024-07-25 02:28:29.821496] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2663:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:42.945 passed 00:03:42.945 Test: test_nvmf_subsystem_state_change ...passed 00:03:42.945 Test: test_nvmf_reservation_custom_ops ...passed 00:03:42.945 00:03:42.945 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.945 suites 1 1 n/a 0 0 00:03:42.945 tests 24 24 24 0 0 00:03:42.945 asserts 499 499 499 0 n/a 00:03:42.945 00:03:42.945 Elapsed time = 0.008 seconds 00:03:42.945 02:28:29 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:42.945 00:03:42.945 00:03:42.945 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.945 http://cunit.sourceforge.net/ 00:03:42.945 00:03:42.945 00:03:42.945 Suite: nvmf 00:03:43.206 Test: test_nvmf_tcp_create ...[2024-07-25 02:28:29.837934] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 750:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:43.206 passed 00:03:43.206 Test: test_nvmf_tcp_destroy ...passed 00:03:43.206 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:43.206 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:43.206 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:43.206 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:43.206 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:43.206 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:03:43.206 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-07-25 02:28:29.853061] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853095] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853123] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853133] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853142] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 passed 00:03:43.206 Test: test_nvmf_tcp_icreq_handle ...passed 00:03:43.206 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:43.206 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-25 02:28:29.853168] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:43.206 [2024-07-25 02:28:29.853178] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853186] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f5f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853194] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2168:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:43.206 [2024-07-25 02:28:29.853203] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f5f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853211] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853219] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f5f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853229] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853245] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f5f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853261] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2564:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:43.206 [2024-07-25 02:28:29.853271] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 passed 00:03:43.206 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-25 02:28:29.853279] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f5f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853290] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2295:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x82048ee88 00:03:43.206 [2024-07-25 02:28:29.853299] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853307] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853317] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2354:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x82048f6f8 00:03:43.206 [2024-07-25 02:28:29.853326] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853334] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853342] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2305:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:43.206 [2024-07-25 02:28:29.853351] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853359] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853368] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2344:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:43.206 [2024-07-25 02:28:29.853376] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853385] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853394] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853402] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853411] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853442] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853450] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 passed 00:03:43.206 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-25 02:28:29.853459] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853468] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853477] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853485] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 [2024-07-25 02:28:29.853495] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1128:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.206 [2024-07-25 02:28:29.853503] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1654:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82048f6f8 is same with the state(5) to be set 00:03:43.206 passed 00:03:43.206 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:03:43.206 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:03:43.206 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-25 02:28:29.860530] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:43.206 [2024-07-25 02:28:29.860566] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:03:43.206 [2024-07-25 02:28:29.860717] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:43.206 [2024-07-25 02:28:29.860739] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:03:43.206 passed 00:03:43.206 00:03:43.206 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.206 suites 1 1 n/a 0 0 00:03:43.206 tests 17 17 17 0 0 00:03:43.206 asserts 222 222 222 0 n/a 00:03:43.206 00:03:43.206 Elapsed time = 0.031 seconds 00:03:43.206 [2024-07-25 02:28:29.860796] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:43.206 [2024-07-25 02:28:29.860804] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:43.206 02:28:29 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:43.206 00:03:43.206 00:03:43.206 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.206 http://cunit.sourceforge.net/ 00:03:43.206 00:03:43.206 00:03:43.206 Suite: nvmf 00:03:43.206 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:43.206 00:03:43.206 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.206 suites 1 1 n/a 0 0 00:03:43.206 tests 1 1 1 0 0 00:03:43.206 asserts 17 17 17 0 n/a 00:03:43.206 00:03:43.206 Elapsed time = 0.000 seconds 00:03:43.206 00:03:43.206 real 0m0.098s 00:03:43.206 user 0m0.039s 00:03:43.206 sys 0m0.058s 00:03:43.206 02:28:29 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.206 02:28:29 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:03:43.206 ************************************ 00:03:43.206 END TEST unittest_nvmf 00:03:43.206 ************************************ 00:03:43.206 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.206 02:28:29 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.206 02:28:29 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.207 02:28:29 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.207 ************************************ 00:03:43.207 START TEST unittest_nvmf_rdma 00:03:43.207 ************************************ 00:03:43.207 02:28:29 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:43.207 00:03:43.207 00:03:43.207 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.207 http://cunit.sourceforge.net/ 00:03:43.207 00:03:43.207 00:03:43.207 Suite: nvmf 00:03:43.207 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-25 02:28:29.937626] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:43.207 [2024-07-25 02:28:29.938018] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:43.207 [2024-07-25 02:28:29.938063] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:43.207 passed 00:03:43.207 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:43.207 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:43.207 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:43.207 Test: test_nvmf_rdma_opts_init ...passed 00:03:43.207 Test: test_nvmf_rdma_request_free_data ...passed 00:03:43.207 Test: test_nvmf_rdma_resources_create ...passed 00:03:43.207 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:43.207 Test: test_nvmf_rdma_resize_cq ...[2024-07-25 02:28:29.939420] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:43.207 Using CQ of insufficient size may lead to CQ overrun 00:03:43.207 [2024-07-25 02:28:29.939453] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:43.207 [2024-07-25 02:28:29.939534] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:43.207 passed 00:03:43.207 00:03:43.207 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.207 suites 1 1 n/a 0 0 00:03:43.207 tests 9 9 9 0 0 00:03:43.207 asserts 579 579 579 0 n/a 00:03:43.207 00:03:43.207 Elapsed time = 0.008 seconds 00:03:43.207 00:03:43.207 real 0m0.011s 00:03:43.207 user 0m0.015s 00:03:43.207 sys 0m0.006s 00:03:43.207 02:28:29 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.207 02:28:29 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:43.207 ************************************ 00:03:43.207 END TEST unittest_nvmf_rdma 00:03:43.207 ************************************ 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.207 02:28:29 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.207 02:28:29 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.207 02:28:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.207 ************************************ 00:03:43.207 START TEST unittest_scsi 00:03:43.207 ************************************ 00:03:43.207 02:28:29 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:03:43.207 02:28:29 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:43.207 00:03:43.207 00:03:43.207 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.207 http://cunit.sourceforge.net/ 00:03:43.207 00:03:43.207 00:03:43.207 Suite: dev_suite 00:03:43.207 Test: dev_destruct_null_dev ...passed 00:03:43.207 Test: dev_destruct_zero_luns ...passed 00:03:43.207 Test: dev_destruct_null_lun ...passed 00:03:43.207 Test: dev_destruct_success ...passed 00:03:43.207 Test: dev_construct_num_luns_zero ...[2024-07-25 02:28:30.001002] 
/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:43.207 passed 00:03:43.207 Test: dev_construct_no_lun_zero ...[2024-07-25 02:28:30.001434] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:43.207 passed 00:03:43.207 Test: dev_construct_null_lun ...passed[2024-07-25 02:28:30.001469] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:43.207 00:03:43.207 Test: dev_construct_name_too_long ...[2024-07-25 02:28:30.001515] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:43.207 passed 00:03:43.207 Test: dev_construct_success ...passed 00:03:43.207 Test: dev_construct_success_lun_zero_not_first ...passed 00:03:43.207 Test: dev_queue_mgmt_task_success ...passed 00:03:43.207 Test: dev_queue_task_success ...passed 00:03:43.207 Test: dev_stop_success ...passed 00:03:43.207 Test: dev_add_port_max_ports ...[2024-07-25 02:28:30.001590] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:43.207 passed 00:03:43.207 Test: dev_add_port_construct_failure1 ...passed 00:03:43.207 Test: dev_add_port_construct_failure2 ...passed 00:03:43.207 Test: dev_add_port_success1 ...passed 00:03:43.207 Test: dev_add_port_success2 ...[2024-07-25 02:28:30.001615] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:43.207 [2024-07-25 02:28:30.001640] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:43.207 passed 00:03:43.207 Test: dev_add_port_success3 ...passed 00:03:43.207 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:43.207 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:43.207 Test: dev_find_port_by_id_success ...passed 00:03:43.207 Test: dev_add_lun_bdev_not_found ...passed 00:03:43.207 Test: dev_add_lun_no_free_lun_id ...[2024-07-25 02:28:30.002011] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:43.207 passed 00:03:43.207 Test: dev_add_lun_success1 ...passed 00:03:43.207 Test: dev_add_lun_success2 ...passed 00:03:43.207 Test: dev_check_pending_tasks ...passed 00:03:43.207 Test: dev_iterate_luns ...passed 00:03:43.207 Test: dev_find_free_lun ...passed 00:03:43.207 00:03:43.207 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.207 suites 1 1 n/a 0 0 00:03:43.207 tests 29 29 29 0 0 00:03:43.207 asserts 97 97 97 0 n/a 00:03:43.207 00:03:43.207 Elapsed time = 0.000 seconds 00:03:43.207 02:28:30 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:43.207 00:03:43.207 00:03:43.207 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.207 http://cunit.sourceforge.net/ 00:03:43.207 00:03:43.207 00:03:43.207 Suite: lun_suite 00:03:43.207 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-25 02:28:30.013394] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort 
task not supported 00:03:43.207 passed 00:03:43.207 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-25 02:28:30.013771] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:43.207 passed 00:03:43.207 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:43.207 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:43.207 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-25 02:28:30.013818] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:43.207 passed 00:03:43.207 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:43.207 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:43.207 Test: lun_append_task_null_lun_not_supported ...passed 00:03:43.207 Test: lun_execute_scsi_task_pending ...passed 00:03:43.207 Test: lun_execute_scsi_task_complete ...passed 00:03:43.207 Test: lun_execute_scsi_task_resize ...passed 00:03:43.207 Test: lun_destruct_success ...passed 00:03:43.207 Test: lun_construct_null_ctx ...[2024-07-25 02:28:30.013878] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:43.207 passed 00:03:43.207 Test: lun_construct_success ...passed 00:03:43.207 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:43.207 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:43.207 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:43.207 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:43.207 00:03:43.207 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.207 suites 1 1 n/a 0 0 00:03:43.207 tests 18 18 18 0 0 00:03:43.207 asserts 153 153 153 0 n/a 00:03:43.207 00:03:43.207 Elapsed time = 0.000 seconds 00:03:43.207 02:28:30 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:43.207 00:03:43.207 00:03:43.207 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.207 http://cunit.sourceforge.net/ 00:03:43.207 00:03:43.207 00:03:43.207 Suite: scsi_suite 00:03:43.207 Test: scsi_init ...passed 00:03:43.207 00:03:43.207 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.208 suites 1 1 n/a 0 0 00:03:43.208 tests 1 1 1 0 0 00:03:43.208 asserts 1 1 1 0 n/a 00:03:43.208 00:03:43.208 Elapsed time = 0.000 seconds 00:03:43.208 02:28:30 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:43.208 00:03:43.208 00:03:43.208 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.208 http://cunit.sourceforge.net/ 00:03:43.208 00:03:43.208 00:03:43.208 Suite: translation_suite 00:03:43.208 Test: mode_select_6_test ...passed 00:03:43.208 Test: mode_select_6_test2 ...passed 00:03:43.208 Test: mode_sense_6_test ...passed 00:03:43.208 Test: mode_sense_10_test ...passed 00:03:43.208 Test: inquiry_evpd_test ...passed 00:03:43.208 Test: inquiry_standard_test ...passed 00:03:43.208 Test: inquiry_overflow_test ...passed 00:03:43.208 Test: task_complete_test ...passed 00:03:43.208 Test: lba_range_test ...passed 00:03:43.208 Test: xfer_len_test ...[2024-07-25 02:28:30.032563] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:43.208 passed 00:03:43.208 Test: xfer_test ...passed 00:03:43.208 Test: scsi_name_padding_test ...passed 00:03:43.208 Test: get_dif_ctx_test 
...passed 00:03:43.208 Test: unmap_split_test ...passed 00:03:43.208 00:03:43.208 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.208 suites 1 1 n/a 0 0 00:03:43.208 tests 14 14 14 0 0 00:03:43.208 asserts 1205 1205 1205 0 n/a 00:03:43.208 00:03:43.208 Elapsed time = 0.000 seconds 00:03:43.208 02:28:30 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:43.208 00:03:43.208 00:03:43.208 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.208 http://cunit.sourceforge.net/ 00:03:43.208 00:03:43.208 00:03:43.208 Suite: reservation_suite 00:03:43.208 Test: test_reservation_register ...[2024-07-25 02:28:30.041552] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 passed 00:03:43.208 Test: test_reservation_reserve ...[2024-07-25 02:28:30.041966] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 [2024-07-25 02:28:30.042010] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:43.208 [2024-07-25 02:28:30.042045] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:43.208 passed 00:03:43.208 Test: test_all_registrant_reservation_reserve ...[2024-07-25 02:28:30.042081] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 passed 00:03:43.208 Test: test_all_registrant_reservation_access ...[2024-07-25 02:28:30.042136] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 [2024-07-25 02:28:30.042165] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:03:43.208 [2024-07-25 02:28:30.042193] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:03:43.208 passed 00:03:43.208 Test: test_reservation_preempt_non_all_regs ...[2024-07-25 02:28:30.042223] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 [2024-07-25 02:28:30.042246] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:43.208 passed 00:03:43.208 Test: test_reservation_preempt_all_regs ...[2024-07-25 02:28:30.042293] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 passed 00:03:43.208 Test: test_reservation_cmds_conflict ...[2024-07-25 02:28:30.042340] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 [2024-07-25 02:28:30.042365] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:43.208 [2024-07-25 02:28:30.042395] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:43.208 [2024-07-25 02:28:30.042417] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:43.208 [2024-07-25 02:28:30.042436] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:43.208 [2024-07-25 02:28:30.042457] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:43.208 passed 00:03:43.208 Test: test_scsi2_reserve_release ...passed 00:03:43.208 Test: test_pr_with_scsi2_reserve_release ...[2024-07-25 02:28:30.042512] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.208 passed 00:03:43.208 00:03:43.208 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.208 suites 1 1 n/a 0 0 00:03:43.208 tests 9 9 9 0 0 00:03:43.208 asserts 344 344 344 0 n/a 00:03:43.208 00:03:43.208 Elapsed time = 0.000 seconds 00:03:43.208 00:03:43.208 real 0m0.051s 00:03:43.208 user 0m0.024s 00:03:43.208 sys 0m0.035s 00:03:43.208 02:28:30 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.208 02:28:30 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:03:43.208 ************************************ 00:03:43.208 END TEST unittest_scsi 00:03:43.208 ************************************ 00:03:43.208 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.208 02:28:30 unittest -- unit/unittest.sh@278 -- # uname -s 00:03:43.208 02:28:30 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:03:43.208 02:28:30 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:43.208 02:28:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.208 02:28:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.208 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.208 ************************************ 00:03:43.208 START TEST unittest_thread 00:03:43.208 ************************************ 00:03:43.208 02:28:30 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:43.471 00:03:43.471 00:03:43.471 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.471 http://cunit.sourceforge.net/ 00:03:43.471 00:03:43.471 00:03:43.471 Suite: io_channel 00:03:43.471 Test: thread_alloc ...passed 00:03:43.471 Test: thread_send_msg ...passed 00:03:43.471 Test: thread_poller ...passed 00:03:43.471 Test: poller_pause ...passed 00:03:43.471 Test: thread_for_each ...passed 00:03:43.471 Test: for_each_channel_remove ...passed 00:03:43.471 Test: for_each_channel_unreg ...[2024-07-25 02:28:30.105996] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x820f96204 already registered (old:0x280675c67000 new:0x280675c67180) 00:03:43.471 passed 00:03:43.471 Test: thread_name ...passed 00:03:43.471 Test: channel ...[2024-07-25 02:28:30.107037] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x2287f8 00:03:43.471 passed 00:03:43.471 Test: channel_destroy_races ...passed 00:03:43.471 Test: thread_exit_test ...[2024-07-25 02:28:30.107977] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 640:thread_exit: 
*ERROR*: thread 0x280675c2ca80 got timeout, and move it to the exited state forcefully 00:03:43.471 passed 00:03:43.471 Test: thread_update_stats_test ...passed 00:03:43.471 Test: nested_channel ...passed 00:03:43.471 Test: device_unregister_and_thread_exit_race ...passed 00:03:43.471 Test: cache_closest_timed_poller ...passed 00:03:43.471 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:43.471 Test: io_device_lookup ...passed 00:03:43.471 Test: spdk_spin ...[2024-07-25 02:28:30.110042] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:43.471 [2024-07-25 02:28:30.110070] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820f96200 00:03:43.471 [2024-07-25 02:28:30.110091] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:43.471 [2024-07-25 02:28:30.110369] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:43.471 [2024-07-25 02:28:30.110398] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820f96200 00:03:43.471 [2024-07-25 02:28:30.110416] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:43.471 [2024-07-25 02:28:30.110433] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820f96200 00:03:43.471 [2024-07-25 02:28:30.110451] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:43.471 [2024-07-25 02:28:30.110468] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820f96200 00:03:43.471 [2024-07-25 02:28:30.110487] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:43.471 [2024-07-25 02:28:30.110506] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x820f96200 00:03:43.471 passed 00:03:43.471 Test: for_each_channel_and_thread_exit_race ...passed 00:03:43.471 Test: for_each_thread_and_thread_exit_race ...passed 00:03:43.471 00:03:43.471 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.471 suites 1 1 n/a 0 0 00:03:43.471 tests 20 20 20 0 0 00:03:43.471 asserts 409 409 409 0 n/a 00:03:43.471 00:03:43.471 Elapsed time = 0.008 seconds 00:03:43.471 00:03:43.471 real 0m0.019s 00:03:43.471 user 0m0.012s 00:03:43.471 sys 0m0.008s 00:03:43.471 02:28:30 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.471 02:28:30 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:03:43.471 ************************************ 00:03:43.471 END TEST unittest_thread 00:03:43.471 ************************************ 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.471 02:28:30 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.471 
02:28:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.471 ************************************ 00:03:43.471 START TEST unittest_iobuf 00:03:43.471 ************************************ 00:03:43.471 02:28:30 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:43.471 00:03:43.471 00:03:43.471 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.471 http://cunit.sourceforge.net/ 00:03:43.471 00:03:43.471 00:03:43.471 Suite: io_channel 00:03:43.471 Test: iobuf ...passed 00:03:43.471 Test: iobuf_cache ...[2024-07-25 02:28:30.168225] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:43.471 [2024-07-25 02:28:30.168577] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:43.471 [2024-07-25 02:28:30.168644] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:43.471 [2024-07-25 02:28:30.168678] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:43.471 [2024-07-25 02:28:30.168705] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:43.471 [2024-07-25 02:28:30.168726] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:03:43.471 passed 00:03:43.471 Test: iobuf_priority ...passed 00:03:43.471 00:03:43.471 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.471 suites 1 1 n/a 0 0 00:03:43.471 tests 3 3 3 0 0 00:03:43.471 asserts 131 131 131 0 n/a 00:03:43.471 00:03:43.471 Elapsed time = 0.008 seconds 00:03:43.471 00:03:43.471 real 0m0.010s 00:03:43.471 user 0m0.010s 00:03:43.471 sys 0m0.001s 00:03:43.471 02:28:30 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.471 02:28:30 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:03:43.471 ************************************ 00:03:43.471 END TEST unittest_iobuf 00:03:43.471 ************************************ 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.471 02:28:30 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.471 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.471 ************************************ 00:03:43.471 START TEST unittest_util 00:03:43.471 ************************************ 00:03:43.471 02:28:30 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:03:43.471 02:28:30 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:43.471 00:03:43.471 00:03:43.471 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.471 http://cunit.sourceforge.net/ 00:03:43.471 00:03:43.471 00:03:43.471 Suite: base64 00:03:43.471 Test: test_base64_get_encoded_strlen ...passed 00:03:43.471 Test: test_base64_get_decoded_len ...passed 00:03:43.471 Test: test_base64_encode ...passed 00:03:43.471 Test: test_base64_decode ...passed 00:03:43.471 Test: test_base64_urlsafe_encode ...passed 00:03:43.471 Test: test_base64_urlsafe_decode ...passed 00:03:43.471 00:03:43.471 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.471 suites 1 1 n/a 0 0 00:03:43.471 tests 6 6 6 0 0 00:03:43.471 asserts 112 112 112 0 n/a 00:03:43.471 00:03:43.471 Elapsed time = 0.000 seconds 00:03:43.471 02:28:30 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:43.471 00:03:43.471 00:03:43.471 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.471 http://cunit.sourceforge.net/ 00:03:43.471 00:03:43.471 00:03:43.471 Suite: bit_array 00:03:43.471 Test: test_1bit ...passed 00:03:43.471 Test: test_64bit ...passed 00:03:43.471 Test: test_find ...passed 00:03:43.472 Test: test_resize ...passed 00:03:43.472 Test: test_errors ...passed 00:03:43.472 Test: test_count ...passed 00:03:43.472 Test: test_mask_store_load ...passed 00:03:43.472 Test: test_mask_clear ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 8 8 8 0 0 00:03:43.472 asserts 5075 5075 5075 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: cpuset 00:03:43.472 Test: test_cpuset ...passed 
00:03:43.472 Test: test_cpuset_parse ...[2024-07-25 02:28:30.244798] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:43.472 [2024-07-25 02:28:30.245169] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:43.472 [2024-07-25 02:28:30.245213] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:43.472 [2024-07-25 02:28:30.245241] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:43.472 [2024-07-25 02:28:30.245263] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:43.472 [2024-07-25 02:28:30.245284] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:43.472 [2024-07-25 02:28:30.245305] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:43.472 [2024-07-25 02:28:30.245346] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:43.472 passed 00:03:43.472 Test: test_cpuset_fmt ...passed 00:03:43.472 Test: test_cpuset_foreach ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 4 4 4 0 0 00:03:43.472 asserts 90 90 90 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: crc16 00:03:43.472 Test: test_crc16_t10dif ...passed 00:03:43.472 Test: test_crc16_t10dif_seed ...passed 00:03:43.472 Test: test_crc16_t10dif_copy ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 3 3 3 0 0 00:03:43.472 asserts 5 5 5 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: crc32_ieee 00:03:43.472 Test: test_crc32_ieee ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 1 1 1 0 0 00:03:43.472 asserts 1 1 1 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: crc32c 00:03:43.472 Test: test_crc32c ...passed 00:03:43.472 Test: test_crc32c_nvme ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 2 2 2 0 0 
00:03:43.472 asserts 16 16 16 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: crc64 00:03:43.472 Test: test_crc64_nvme ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 1 1 1 0 0 00:03:43.472 asserts 4 4 4 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: string 00:03:43.472 Test: test_parse_ip_addr ...passed 00:03:43.472 Test: test_str_chomp ...passed 00:03:43.472 Test: test_parse_capacity ...passed 00:03:43.472 Test: test_sprintf_append_realloc ...passed 00:03:43.472 Test: test_strtol ...passed 00:03:43.472 Test: test_strtoll ...passed 00:03:43.472 Test: test_strarray ...passed 00:03:43.472 Test: test_strcpy_replace ...passed 00:03:43.472 00:03:43.472 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.472 suites 1 1 n/a 0 0 00:03:43.472 tests 8 8 8 0 0 00:03:43.472 asserts 161 161 161 0 n/a 00:03:43.472 00:03:43.472 Elapsed time = 0.000 seconds 00:03:43.472 02:28:30 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:43.472 00:03:43.472 00:03:43.472 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.472 http://cunit.sourceforge.net/ 00:03:43.472 00:03:43.472 00:03:43.472 Suite: dif 00:03:43.472 Test: dif_generate_and_verify_test ...[2024-07-25 02:28:30.297776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:43.472 [2024-07-25 02:28:30.298254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:43.472 [2024-07-25 02:28:30.298372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:43.472 [2024-07-25 02:28:30.298500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:43.472 [2024-07-25 02:28:30.298612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:43.472 [2024-07-25 02:28:30.298730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:43.472 passed 00:03:43.472 Test: dif_disable_check_test ...[2024-07-25 02:28:30.299107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:43.472 [2024-07-25 02:28:30.299228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:43.472 [2024-07-25 02:28:30.299348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:43.472 passed 00:03:43.472 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-25 02:28:30.299747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:43.472 [2024-07-25 02:28:30.299860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:43.472 [2024-07-25 02:28:30.299965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:43.472 [2024-07-25 02:28:30.300100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:43.472 [2024-07-25 02:28:30.300217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.472 [2024-07-25 02:28:30.300331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.472 [2024-07-25 02:28:30.300448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.472 [2024-07-25 02:28:30.300555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.472 [2024-07-25 02:28:30.300701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:43.472 [2024-07-25 02:28:30.300808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:43.472 [2024-07-25 02:28:30.300929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:43.472 passed 00:03:43.472 Test: dif_apptag_mask_test ...[2024-07-25 02:28:30.301069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:43.473 [2024-07-25 02:28:30.301181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:43.473 passed 00:03:43.473 Test: dif_sec_8_md_8_error_test ...[2024-07-25 02:28:30.301251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:03:43.473 passed 00:03:43.473 Test: dif_sec_512_md_0_error_test ...passed 00:03:43.473 Test: dif_sec_512_md_16_error_test ...[2024-07-25 02:28:30.301275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.473 [2024-07-25 02:28:30.301313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:03:43.473 [2024-07-25 02:28:30.301347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:03:43.473 passed 00:03:43.473 Test: dif_sec_4096_md_0_8_error_test ...[2024-07-25 02:28:30.301384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:03:43.473 [2024-07-25 02:28:30.301404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.473 [2024-07-25 02:28:30.301423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.473 [2024-07-25 02:28:30.301441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.473 passed 00:03:43.473 Test: dif_sec_4100_md_128_error_test ...[2024-07-25 02:28:30.301479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:03:43.473 passed 00:03:43.473 Test: dif_guard_seed_test ...passed 00:03:43.473 Test: dif_guard_value_test ...[2024-07-25 02:28:30.301517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:03:43.473 passed 00:03:43.473 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:43.473 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.473 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 02:28:30.311760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f54c, Actual=fd4c 00:03:43.473 [2024-07-25 02:28:30.312260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f621, Actual=fe21 00:03:43.473 [2024-07-25 02:28:30.312755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.313256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.313749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.473 [2024-07-25 02:28:30.314240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 
00:03:43.473 [2024-07-25 02:28:30.314729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=5cb7 00:03:43.473 [2024-07-25 02:28:30.315217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=19d3 00:03:43.473 [2024-07-25 02:28:30.315700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab75bed, Actual=1ab753ed 00:03:43.473 [2024-07-25 02:28:30.316199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574e60, Actual=38574660 00:03:43.473 [2024-07-25 02:28:30.316525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.316846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.317165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.473 [2024-07-25 02:28:30.317485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.473 [2024-07-25 02:28:30.317802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=d06d6571 00:03:43.473 [2024-07-25 02:28:30.318116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=9c602e9a 00:03:43.473 [2024-07-25 02:28:30.318423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.473 [2024-07-25 02:28:30.318742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:03:43.473 [2024-07-25 02:28:30.319067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.319384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.319703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8000061 00:03:43.473 [2024-07-25 02:28:30.320020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8000061 00:03:43.473 [2024-07-25 02:28:30.320338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.473 [2024-07-25 02:28:30.320652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=ff6a0463dc8dacab 00:03:43.473 passed 00:03:43.473 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-25 02:28:30.320837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:03:43.473 
[2024-07-25 02:28:30.320879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:03:43.473 [2024-07-25 02:28:30.320927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.320969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.321013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.473 [2024-07-25 02:28:30.321069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.473 [2024-07-25 02:28:30.321118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5cb7 00:03:43.473 [2024-07-25 02:28:30.321152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=19d3 00:03:43.473 [2024-07-25 02:28:30.321192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:03:43.473 [2024-07-25 02:28:30.321242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:03:43.473 [2024-07-25 02:28:30.321294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.321341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.321386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.473 [2024-07-25 02:28:30.321435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.473 [2024-07-25 02:28:30.321483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d06d6571 00:03:43.473 [2024-07-25 02:28:30.321523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9c602e9a 00:03:43.473 [2024-07-25 02:28:30.321554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.473 [2024-07-25 02:28:30.321611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:03:43.473 [2024-07-25 02:28:30.321666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.321714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.321767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 
00:03:43.473 [2024-07-25 02:28:30.321815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.473 [2024-07-25 02:28:30.321868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.473 [2024-07-25 02:28:30.321907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ff6a0463dc8dacab 00:03:43.473 passed 00:03:43.473 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-25 02:28:30.321949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:03:43.473 [2024-07-25 02:28:30.321990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:03:43.473 [2024-07-25 02:28:30.322039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.322088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.473 [2024-07-25 02:28:30.322134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.473 [2024-07-25 02:28:30.322187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.322238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5cb7 00:03:43.474 [2024-07-25 02:28:30.322281] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=19d3 00:03:43.474 [2024-07-25 02:28:30.322313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:03:43.474 [2024-07-25 02:28:30.322362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:03:43.474 [2024-07-25 02:28:30.322417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.322468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.322517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.322570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.322617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d06d6571 00:03:43.474 [2024-07-25 02:28:30.322648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9c602e9a 00:03:43.474 [2024-07-25 02:28:30.322685] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.474 [2024-07-25 02:28:30.322732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:03:43.474 [2024-07-25 02:28:30.322778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.322819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.322872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.474 [2024-07-25 02:28:30.322931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.474 [2024-07-25 02:28:30.322972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.474 [2024-07-25 02:28:30.323008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ff6a0463dc8dacab 00:03:43.474 passed 00:03:43.474 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-25 02:28:30.323050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:03:43.474 [2024-07-25 02:28:30.323099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:03:43.474 [2024-07-25 02:28:30.323147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.323197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.323238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.323286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.323337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5cb7 00:03:43.474 [2024-07-25 02:28:30.323380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=19d3 00:03:43.474 [2024-07-25 02:28:30.323419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:03:43.474 [2024-07-25 02:28:30.323462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:03:43.474 [2024-07-25 02:28:30.323511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.323559] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.323609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.323663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.323713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d06d6571 00:03:43.474 [2024-07-25 02:28:30.323751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9c602e9a 00:03:43.474 [2024-07-25 02:28:30.323783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.474 [2024-07-25 02:28:30.323835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:03:43.474 [2024-07-25 02:28:30.323883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.323931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.323982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.474 [2024-07-25 02:28:30.324035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.474 [2024-07-25 02:28:30.324083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.474 passed 00:03:43.474 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-25 02:28:30.324121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ff6a0463dc8dacab 00:03:43.474 [2024-07-25 02:28:30.324155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:03:43.474 [2024-07-25 02:28:30.324203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:03:43.474 [2024-07-25 02:28:30.324267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.324323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.324369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.324418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.324471] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5cb7 00:03:43.474 [2024-07-25 02:28:30.324511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=19d3 00:03:43.474 passed 00:03:43.474 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-25 02:28:30.324553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:03:43.474 [2024-07-25 02:28:30.324594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:03:43.474 [2024-07-25 02:28:30.324654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.324705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.324755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.324799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.324851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d06d6571 00:03:43.474 [2024-07-25 02:28:30.324887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9c602e9a 00:03:43.474 [2024-07-25 02:28:30.324918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.474 [2024-07-25 02:28:30.324966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:03:43.474 [2024-07-25 02:28:30.325014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.325069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.325118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.474 [2024-07-25 02:28:30.325163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.474 [2024-07-25 02:28:30.325212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.474 passed 00:03:43.474 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-25 02:28:30.325244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ff6a0463dc8dacab 00:03:43.474 [2024-07-25 02:28:30.325277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=f54c, Actual=fd4c 00:03:43.474 [2024-07-25 02:28:30.325325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:03:43.474 [2024-07-25 02:28:30.325381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.325430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.474 [2024-07-25 02:28:30.325483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.325532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.474 [2024-07-25 02:28:30.325584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5cb7 00:03:43.474 [2024-07-25 02:28:30.325624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=19d3 00:03:43.474 passed 00:03:43.475 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-25 02:28:30.325664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab75bed, Actual=1ab753ed 00:03:43.475 [2024-07-25 02:28:30.325712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574e60, Actual=38574660 00:03:43.475 [2024-07-25 02:28:30.325753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.325800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.325856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.475 [2024-07-25 02:28:30.325909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=858 00:03:43.475 [2024-07-25 02:28:30.325957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d06d6571 00:03:43.475 [2024-07-25 02:28:30.326000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=9c602e9a 00:03:43.475 [2024-07-25 02:28:30.326031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.475 [2024-07-25 02:28:30.326078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837aa66, Actual=88010a2d4837a266 00:03:43.475 [2024-07-25 02:28:30.326128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.326177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.326222] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.475 [2024-07-25 02:28:30.326273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:03:43.475 [2024-07-25 02:28:30.326300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.475 passed 00:03:43.475 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-07-25 02:28:30.326327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ff6a0463dc8dacab 00:03:43.475 passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:43.475 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:43.475 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.475 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 02:28:30.330016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f54c, Actual=fd4c 00:03:43.475 [2024-07-25 02:28:30.330142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1086, Actual=1886 00:03:43.475 [2024-07-25 02:28:30.330264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.330389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.330509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.475 [2024-07-25 02:28:30.330629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.475 [2024-07-25 02:28:30.330749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=5cb7 00:03:43.475 [2024-07-25 02:28:30.330866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=bce5 00:03:43.475 [2024-07-25 02:28:30.330992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab75bed, Actual=1ab753ed 00:03:43.475 [2024-07-25 02:28:30.331110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=b4f97855, Actual=b4f97055 00:03:43.475 [2024-07-25 02:28:30.331228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.331345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.331465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.475 [2024-07-25 02:28:30.331582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.475 [2024-07-25 02:28:30.331699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=d06d6571 00:03:43.475 [2024-07-25 02:28:30.331817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=a13af0c5 00:03:43.475 [2024-07-25 02:28:30.331935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.475 [2024-07-25 02:28:30.332061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a540ae5e6af41aff, Actual=a540ae5e6af412ff 00:03:43.475 [2024-07-25 02:28:30.332185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.332299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.332419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8000061 00:03:43.475 [2024-07-25 02:28:30.332537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8000061 00:03:43.475 [2024-07-25 02:28:30.332655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.475 passed 00:03:43.475 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 02:28:30.332773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=4ba20cc8062ade5f 00:03:43.475 [2024-07-25 02:28:30.332809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:03:43.475 [2024-07-25 02:28:30.332838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:03:43.475 [2024-07-25 02:28:30.332873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.332902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.332936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.475 [2024-07-25 02:28:30.332971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.475 [2024-07-25 02:28:30.332999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=89, Expected=fd4c, Actual=5cb7 00:03:43.475 [2024-07-25 02:28:30.333033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7532 00:03:43.475 [2024-07-25 02:28:30.333061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab75bed, Actual=1ab753ed 00:03:43.475 [2024-07-25 02:28:30.333093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2a49b007, Actual=2a49b807 00:03:43.475 [2024-07-25 02:28:30.333135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.333168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.475 [2024-07-25 02:28:30.333198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.475 [2024-07-25 02:28:30.333226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.475 [2024-07-25 02:28:30.333259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=d06d6571 00:03:43.476 [2024-07-25 02:28:30.333294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=3f8a3897 00:03:43.476 [2024-07-25 02:28:30.333323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.476 [2024-07-25 02:28:30.333364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d1a14cb3b120ff02, Actual=d1a14cb3b120f702 00:03:43.476 [2024-07-25 02:28:30.333400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.333429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.333462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:03:43.476 [2024-07-25 02:28:30.333491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:03:43.476 [2024-07-25 02:28:30.333528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.476 passed 00:03:43.476 Test: dix_sec_0_md_8_error ...[2024-07-25 02:28:30.333557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=3f43ee25ddfe3ba2 00:03:43.476 [2024-07-25 02:28:30.333565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 555:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed 00:03:43.476 passed 00:03:43.476 Test: dix_sec_512_md_0_error ...passed 00:03:43.476 Test: dix_sec_512_md_16_error ...passed 00:03:43.476 Test: dix_sec_4096_md_0_8_error ...[2024-07-25 02:28:30.333571] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.476 [2024-07-25 02:28:30.333578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:03:43.476 [2024-07-25 02:28:30.333583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 566:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB 00:03:43.476 [2024-07-25 02:28:30.333589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.476 [2024-07-25 02:28:30.333594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.476 [2024-07-25 02:28:30.333599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.476 [2024-07-25 02:28:30.333604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 540:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.476 passed 00:03:43.476 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:03:43.476 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:43.476 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:43.476 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.476 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:43.476 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:43.476 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.476 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:43.476 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.476 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-25 02:28:30.337185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=f54c, Actual=fd4c 00:03:43.476 [2024-07-25 02:28:30.337311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1086, Actual=1886 00:03:43.476 [2024-07-25 02:28:30.337432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.337555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.337675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.476 [2024-07-25 02:28:30.337797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.476 [2024-07-25 02:28:30.337916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=5cb7 00:03:43.476 [2024-07-25 02:28:30.338037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=bce5 00:03:43.476 [2024-07-25 02:28:30.338155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab75bed, Actual=1ab753ed 00:03:43.476 [2024-07-25 02:28:30.338272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=97, Expected=b4f97855, Actual=b4f97055 00:03:43.476 [2024-07-25 02:28:30.338388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.338510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.338628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.476 [2024-07-25 02:28:30.338744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=861 00:03:43.476 [2024-07-25 02:28:30.338861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=d06d6571 00:03:43.476 [2024-07-25 02:28:30.338978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=a13af0c5 00:03:43.476 [2024-07-25 02:28:30.339101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.476 [2024-07-25 02:28:30.339218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a540ae5e6af41aff, Actual=a540ae5e6af412ff 00:03:43.476 [2024-07-25 02:28:30.339336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.339454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.339573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8000061 00:03:43.476 [2024-07-25 02:28:30.339690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=8000061 00:03:43.476 [2024-07-25 02:28:30.339809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.476 passed 00:03:43.476 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-25 02:28:30.339928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=4ba20cc8062ade5f 00:03:43.476 [2024-07-25 02:28:30.339969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:03:43.476 [2024-07-25 02:28:30.339999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:03:43.476 [2024-07-25 02:28:30.340027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.340059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.340094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 
00:03:43.476 [2024-07-25 02:28:30.340122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.476 [2024-07-25 02:28:30.340150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=5cb7 00:03:43.476 [2024-07-25 02:28:30.340186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=7532 00:03:43.476 [2024-07-25 02:28:30.340214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab75bed, Actual=1ab753ed 00:03:43.476 [2024-07-25 02:28:30.340250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2a49b007, Actual=2a49b807 00:03:43.476 [2024-07-25 02:28:30.340282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.340311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.340345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.476 [2024-07-25 02:28:30.340373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=859 00:03:43.476 [2024-07-25 02:28:30.340404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=d06d6571 00:03:43.476 [2024-07-25 02:28:30.340438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=3f8a3897 00:03:43.476 [2024-07-25 02:28:30.340467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc28d3, Actual=a576a7728ecc20d3 00:03:43.476 [2024-07-25 02:28:30.340495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d1a14cb3b120ff02, Actual=d1a14cb3b120f702 00:03:43.476 [2024-07-25 02:28:30.340528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.340556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:03:43.476 [2024-07-25 02:28:30.340590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:03:43.476 [2024-07-25 02:28:30.340618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:03:43.476 [2024-07-25 02:28:30.340651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=cb38e83487ee1abe 00:03:43.476 passed 00:03:43.476 Test: set_md_interleave_iovs_test ...[2024-07-25 02:28:30.340685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=3f43ee25ddfe3ba2 00:03:43.476 passed 00:03:43.476 Test: 
set_md_interleave_iovs_split_test ...passed 00:03:43.476 Test: dif_generate_stream_pi_16_test ...passed 00:03:43.476 Test: dif_generate_stream_test ...passed 00:03:43.476 Test: set_md_interleave_iovs_alignment_test ...passed 00:03:43.476 Test: dif_generate_split_test ...[2024-07-25 02:28:30.341273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1857:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:03:43.476 passed 00:03:43.476 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:43.476 Test: dif_verify_split_test ...passed 00:03:43.476 Test: dif_verify_stream_multi_segments_test ...passed 00:03:43.476 Test: update_crc32c_pi_16_test ...passed 00:03:43.476 Test: update_crc32c_test ...passed 00:03:43.477 Test: dif_update_crc32c_split_test ...passed 00:03:43.477 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:43.477 Test: get_range_with_md_test ...passed 00:03:43.477 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:43.477 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:43.477 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:43.477 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:43.477 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:43.477 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:43.477 Test: dif_generate_and_verify_unmap_test ...passed 00:03:43.477 Test: dif_pi_format_check_test ...passed 00:03:43.477 Test: dif_type_check_test ...passed 00:03:43.477 00:03:43.477 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.477 suites 1 1 n/a 0 0 00:03:43.477 tests 86 86 86 0 0 00:03:43.477 asserts 3605 3605 3605 0 n/a 00:03:43.477 00:03:43.477 Elapsed time = 0.055 seconds 00:03:43.477 02:28:30 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:43.737 00:03:43.737 00:03:43.737 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.737 http://cunit.sourceforge.net/ 00:03:43.737 00:03:43.737 00:03:43.737 Suite: iov 00:03:43.737 Test: test_single_iov ...passed 00:03:43.737 Test: test_simple_iov ...passed 00:03:43.737 Test: test_complex_iov ...passed 00:03:43.737 Test: test_iovs_to_buf ...passed 00:03:43.737 Test: test_buf_to_iovs ...passed 00:03:43.737 Test: test_memset ...passed 00:03:43.737 Test: test_iov_one ...passed 00:03:43.737 Test: test_iov_xfer ...passed 00:03:43.737 00:03:43.737 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.737 suites 1 1 n/a 0 0 00:03:43.737 tests 8 8 8 0 0 00:03:43.737 asserts 156 156 156 0 n/a 00:03:43.737 00:03:43.737 Elapsed time = 0.000 seconds 00:03:43.737 02:28:30 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:43.737 00:03:43.737 00:03:43.737 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.737 http://cunit.sourceforge.net/ 00:03:43.737 00:03:43.737 00:03:43.737 Suite: math 00:03:43.737 Test: test_serial_number_arithmetic ...passed 00:03:43.737 Suite: erase 00:03:43.737 Test: test_memset_s ...passed 00:03:43.737 00:03:43.737 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.737 suites 2 2 n/a 0 0 00:03:43.737 tests 2 2 2 0 0 00:03:43.737 asserts 18 18 18 0 n/a 00:03:43.737 00:03:43.737 Elapsed time = 0.000 seconds 00:03:43.737 02:28:30 unittest.unittest_util -- unit/unittest.sh@145 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:43.737 00:03:43.737 00:03:43.737 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.737 http://cunit.sourceforge.net/ 00:03:43.737 00:03:43.737 00:03:43.737 Suite: pipe 00:03:43.737 Test: test_create_destroy ...passed 00:03:43.737 Test: test_write_get_buffer ...passed 00:03:43.737 Test: test_write_advance ...passed 00:03:43.737 Test: test_read_get_buffer ...passed 00:03:43.737 Test: test_read_advance ...passed 00:03:43.737 Test: test_data ...passed 00:03:43.737 00:03:43.737 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.737 suites 1 1 n/a 0 0 00:03:43.737 tests 6 6 6 0 0 00:03:43.737 asserts 251 251 251 0 n/a 00:03:43.737 00:03:43.737 Elapsed time = 0.000 seconds 00:03:43.737 02:28:30 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:43.737 00:03:43.737 00:03:43.737 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.737 http://cunit.sourceforge.net/ 00:03:43.737 00:03:43.737 00:03:43.737 Suite: xor 00:03:43.737 Test: test_xor_gen ...passed 00:03:43.737 00:03:43.737 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.737 suites 1 1 n/a 0 0 00:03:43.737 tests 1 1 1 0 0 00:03:43.737 asserts 17 17 17 0 n/a 00:03:43.737 00:03:43.737 Elapsed time = 0.000 seconds 00:03:43.737 00:03:43.737 real 0m0.169s 00:03:43.737 user 0m0.119s 00:03:43.737 sys 0m0.051s 00:03:43.737 02:28:30 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.737 02:28:30 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:03:43.737 ************************************ 00:03:43.737 END TEST unittest_util 00:03:43.737 ************************************ 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.738 02:28:30 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.738 02:28:30 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.738 ************************************ 00:03:43.738 START TEST unittest_dma 00:03:43.738 ************************************ 00:03:43.738 02:28:30 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:43.738 00:03:43.738 00:03:43.738 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.738 http://cunit.sourceforge.net/ 00:03:43.738 00:03:43.738 00:03:43.738 Suite: dma_suite 00:03:43.738 Test: test_dma ...[2024-07-25 02:28:30.446329] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:43.738 passed 00:03:43.738 00:03:43.738 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.738 suites 1 1 n/a 0 0 00:03:43.738 tests 1 1 1 0 0 00:03:43.738 asserts 54 54 54 0 n/a 00:03:43.738 00:03:43.738 Elapsed time = 0.000 seconds 00:03:43.738 00:03:43.738 real 0m0.008s 00:03:43.738 user 0m0.000s 00:03:43.738 sys 0m0.008s 00:03:43.738 02:28:30 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.738 02:28:30 unittest.unittest_dma -- common/autotest_common.sh@10 -- # 
set +x 00:03:43.738 ************************************ 00:03:43.738 END TEST unittest_dma 00:03:43.738 ************************************ 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.738 02:28:30 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.738 ************************************ 00:03:43.738 START TEST unittest_init 00:03:43.738 ************************************ 00:03:43.738 02:28:30 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:03:43.738 02:28:30 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:43.738 00:03:43.738 00:03:43.738 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.738 http://cunit.sourceforge.net/ 00:03:43.738 00:03:43.738 00:03:43.738 Suite: subsystem_suite 00:03:43.738 Test: subsystem_sort_test_depends_on_single ...passed 00:03:43.738 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:43.738 Test: subsystem_sort_test_missing_dependency ...[2024-07-25 02:28:30.508760] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:43.738 [2024-07-25 02:28:30.509109] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:43.738 passed 00:03:43.738 00:03:43.738 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.738 suites 1 1 n/a 0 0 00:03:43.738 tests 3 3 3 0 0 00:03:43.738 asserts 20 20 20 0 n/a 00:03:43.738 00:03:43.738 Elapsed time = 0.000 seconds 00:03:43.738 00:03:43.738 real 0m0.009s 00:03:43.738 user 0m0.001s 00:03:43.738 sys 0m0.009s 00:03:43.738 02:28:30 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.738 02:28:30 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.738 ************************************ 00:03:43.738 END TEST unittest_init 00:03:43.738 ************************************ 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.738 02:28:30 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.738 ************************************ 00:03:43.738 START TEST unittest_keyring 00:03:43.738 ************************************ 00:03:43.738 02:28:30 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:43.738 00:03:43.738 00:03:43.738 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.738 http://cunit.sourceforge.net/ 00:03:43.738 00:03:43.738 00:03:43.738 Suite: keyring 00:03:43.738 Test: test_keyring_add_remove ...[2024-07-25 02:28:30.573025] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:03:43.738 [2024-07-25 02:28:30.573409] 
/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:03:43.738 passed 00:03:43.738 Test: test_keyring_get_put ...[2024-07-25 02:28:30.573457] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:03:43.738 passed 00:03:43.738 00:03:43.738 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.738 suites 1 1 n/a 0 0 00:03:43.738 tests 2 2 2 0 0 00:03:43.738 asserts 44 44 44 0 n/a 00:03:43.738 00:03:43.738 Elapsed time = 0.000 seconds 00:03:43.738 00:03:43.738 real 0m0.009s 00:03:43.738 user 0m0.008s 00:03:43.738 sys 0m0.001s 00:03:43.738 02:28:30 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.738 02:28:30 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:03:43.738 ************************************ 00:03:43.738 END TEST unittest_keyring 00:03:43.738 ************************************ 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:43.738 02:28:30 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:03:43.738 02:28:30 unittest -- unit/unittest.sh@305 -- # set +x 00:03:43.738 00:03:43.738 00:03:43.738 ===================== 00:03:43.738 All unit tests passed 00:03:43.738 ===================== 00:03:43.738 WARN: lcov not installed or SPDK built without coverage! 00:03:43.738 WARN: neither valgrind nor ASAN is enabled! 00:03:43.738 00:03:43.738 00:03:43.738 00:03:43.738 real 0m14.030s 00:03:43.738 user 0m11.179s 00:03:43.738 sys 0m1.567s 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:43.738 02:28:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:43.738 ************************************ 00:03:43.738 END TEST unittest 00:03:43.738 ************************************ 00:03:43.998 02:28:30 -- common/autotest_common.sh@1142 -- # return 0 00:03:43.998 02:28:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:43.998 02:28:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:43.998 02:28:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:43.998 02:28:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:43.998 02:28:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.998 02:28:30 -- common/autotest_common.sh@10 -- # set +x 00:03:43.998 02:28:30 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:43.998 02:28:30 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:43.998 02:28:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.998 02:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.998 02:28:30 -- common/autotest_common.sh@10 -- # set +x 00:03:43.998 ************************************ 00:03:43.998 START TEST env 00:03:43.998 ************************************ 00:03:43.998 02:28:30 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:43.998 * Looking for test storage... 
00:03:43.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:43.998 02:28:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:43.998 02:28:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:43.998 02:28:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.998 02:28:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.998 ************************************ 00:03:43.998 START TEST env_memory 00:03:43.998 ************************************ 00:03:43.998 02:28:30 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:43.998 00:03:43.998 00:03:43.998 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.998 http://cunit.sourceforge.net/ 00:03:43.998 00:03:43.998 00:03:43.998 Suite: memory 00:03:44.258 Test: alloc and free memory map ...[2024-07-25 02:28:30.906712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.258 passed 00:03:44.258 Test: mem map translation ...[2024-07-25 02:28:30.916617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.258 [2024-07-25 02:28:30.916665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.258 [2024-07-25 02:28:30.916691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.258 [2024-07-25 02:28:30.916700] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.258 passed 00:03:44.258 Test: mem map registration ...[2024-07-25 02:28:30.925415] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:44.258 [2024-07-25 02:28:30.925452] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:44.258 passed 00:03:44.258 Test: mem map adjacent registrations ...passed 00:03:44.258 00:03:44.258 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.258 suites 1 1 n/a 0 0 00:03:44.258 tests 4 4 4 0 0 00:03:44.258 asserts 152 152 152 0 n/a 00:03:44.258 00:03:44.258 Elapsed time = 0.047 seconds 00:03:44.258 00:03:44.258 real 0m0.051s 00:03:44.258 user 0m0.042s 00:03:44.258 sys 0m0.009s 00:03:44.258 02:28:30 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.258 02:28:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:44.258 ************************************ 00:03:44.258 END TEST env_memory 00:03:44.258 ************************************ 00:03:44.258 02:28:30 env -- common/autotest_common.sh@1142 -- # return 0 00:03:44.258 02:28:30 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:44.259 02:28:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.259 02:28:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.259 02:28:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.259 ************************************ 00:03:44.259 START TEST env_vtophys 
00:03:44.259 ************************************ 00:03:44.259 02:28:30 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:44.259 EAL: lib.eal log level changed from notice to debug 00:03:44.259 EAL: Sysctl reports 10 cpus 00:03:44.259 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 1 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 2 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 3 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 4 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 5 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 6 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 7 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 8 as core 0 on socket 0 00:03:44.259 EAL: Detected lcore 9 as core 0 on socket 0 00:03:44.259 EAL: Maximum logical cores by configuration: 128 00:03:44.259 EAL: Detected CPU lcores: 10 00:03:44.259 EAL: Detected NUMA nodes: 1 00:03:44.259 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:44.259 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:44.259 EAL: Checking presence of .so 'librte_eal.so' 00:03:44.259 EAL: Detected static linkage of DPDK 00:03:44.259 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.259 EAL: PCI scan found 10 devices 00:03:44.259 EAL: Specific IOVA mode is not requested, autodetecting 00:03:44.259 EAL: Selecting IOVA mode according to bus requests 00:03:44.259 EAL: Bus pci wants IOVA as 'PA' 00:03:44.259 EAL: Selected IOVA mode 'PA' 00:03:44.259 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:44.259 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.259 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x100052c000) not respected! 00:03:44.259 EAL: This may cause issues with mapping memory into secondary processes 00:03:44.259 EAL: Virtual area found at 0x100052c000 (size = 0x2e000) 00:03:44.259 EAL: Setting up physically contiguous memory... 00:03:44.259 EAL: Ask a virtual area of 0x1000 bytes 00:03:44.259 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x10013bc000) not respected! 00:03:44.259 EAL: This may cause issues with mapping memory into secondary processes 00:03:44.259 EAL: Virtual area found at 0x10013bc000 (size = 0x1000) 00:03:44.259 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:44.259 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:44.259 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:44.259 EAL: This may cause issues with mapping memory into secondary processes 00:03:44.259 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:44.259 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:44.259 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x110000000, len 268435456 00:03:44.259 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x120000000, len 268435456 00:03:44.519 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x130000000, len 268435456 00:03:44.519 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x140000000, len 268435456 00:03:44.519 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x150000000, len 268435456 00:03:44.519 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0x220000000, len 268435456 00:03:44.519 EAL: Mapped memory segment 6 @ 0x10e0000000: physaddr:0x250000000, len 268435456 00:03:44.785 EAL: Mapped memory segment 7 @ 0x10b0000000: physaddr:0x260000000, len 268435456 00:03:44.785 EAL: No shared files mode enabled, IPC is disabled 00:03:44.785 EAL: Added 1792M to heap on socket 0 00:03:44.785 EAL: Added 256M to heap on socket 0 00:03:44.785 EAL: TSC is not safe to use in SMP mode 00:03:44.785 EAL: TSC is not invariant 00:03:44.785 EAL: TSC frequency is ~2294609 KHz 00:03:44.785 EAL: Main lcore 0 is ready (tid=3879f7612000;cpuset=[0]) 00:03:44.785 EAL: PCI scan found 10 devices 00:03:44.785 EAL: Registering mem event callbacks not supported 00:03:44.785 00:03:44.785 00:03:44.785 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.785 http://cunit.sourceforge.net/ 00:03:44.785 00:03:44.785 00:03:44.785 Suite: components_suite 00:03:44.785 Test: vtophys_malloc_test ...passed 00:03:45.053 Test: vtophys_spdk_malloc_test ...passed 00:03:45.053 00:03:45.053 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.053 suites 1 1 n/a 0 0 00:03:45.053 tests 2 2 2 0 0 00:03:45.053 asserts 514 514 514 0 n/a 00:03:45.053 00:03:45.053 Elapsed time = 0.297 seconds 00:03:45.053 00:03:45.053 real 0m0.780s 00:03:45.053 user 0m0.307s 00:03:45.053 sys 0m0.470s 00:03:45.053 02:28:31 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.053 02:28:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:45.053 ************************************ 00:03:45.053 END TEST env_vtophys 00:03:45.053 ************************************ 00:03:45.053 02:28:31 env -- common/autotest_common.sh@1142 -- # return 0 00:03:45.053 02:28:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.053 02:28:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.053 02:28:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.053 02:28:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.053 ************************************ 00:03:45.053 START TEST env_pci 00:03:45.053 ************************************ 00:03:45.053 02:28:31 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.053 00:03:45.053 00:03:45.053 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.053 http://cunit.sourceforge.net/ 00:03:45.053 00:03:45.053 00:03:45.053 Suite: pci 00:03:45.053 Test: pci_hook ...passed 00:03:45.054 00:03:45.054 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.054 suites 1 1 n/a 0 0 00:03:45.054 tests 1 1 1 0 0 00:03:45.054 asserts 25 25 25 0 n/a 00:03:45.054 00:03:45.054 Elapsed time = 0.008 seconds 00:03:45.054 EAL: 
Cannot find device (10000:00:01.0) 00:03:45.054 EAL: Failed to attach device on primary process 00:03:45.054 00:03:45.054 real 0m0.012s 00:03:45.054 user 0m0.009s 00:03:45.054 sys 0m0.005s 00:03:45.054 02:28:31 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.054 02:28:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:45.054 ************************************ 00:03:45.054 END TEST env_pci 00:03:45.054 ************************************ 00:03:45.054 02:28:31 env -- common/autotest_common.sh@1142 -- # return 0 00:03:45.054 02:28:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.054 02:28:31 env -- env/env.sh@15 -- # uname 00:03:45.054 02:28:31 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:45.054 02:28:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:45.054 02:28:31 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:03:45.054 02:28:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.054 02:28:31 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.054 ************************************ 00:03:45.054 START TEST env_dpdk_post_init 00:03:45.054 ************************************ 00:03:45.054 02:28:31 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:45.054 EAL: Sysctl reports 10 cpus 00:03:45.054 EAL: Detected CPU lcores: 10 00:03:45.054 EAL: Detected NUMA nodes: 1 00:03:45.054 EAL: Detected static linkage of DPDK 00:03:45.054 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.054 EAL: Selected IOVA mode 'PA' 00:03:45.054 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:45.313 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x110000000, len 268435456 00:03:45.313 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x120000000, len 268435456 00:03:45.313 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x130000000, len 268435456 00:03:45.313 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x140000000, len 268435456 00:03:45.313 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x150000000, len 268435456 00:03:45.572 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0x220000000, len 268435456 00:03:45.572 EAL: Mapped memory segment 6 @ 0x10e0000000: physaddr:0x250000000, len 268435456 00:03:45.572 EAL: Mapped memory segment 7 @ 0x10b0000000: physaddr:0x260000000, len 268435456 00:03:45.572 EAL: TSC is not safe to use in SMP mode 00:03:45.572 EAL: TSC is not invariant 00:03:45.572 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.572 [2024-07-25 02:28:32.330984] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:45.573 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:45.573 Starting DPDK initialization... 00:03:45.573 Starting SPDK post initialization... 00:03:45.573 SPDK NVMe probe 00:03:45.573 Attaching to 0000:00:10.0 00:03:45.573 Attached to 0000:00:10.0 00:03:45.573 Cleaning up... 
00:03:45.573 00:03:45.573 real 0m0.481s 00:03:45.573 user 0m0.006s 00:03:45.573 sys 0m0.485s 00:03:45.573 02:28:32 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.573 02:28:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.573 ************************************ 00:03:45.573 END TEST env_dpdk_post_init 00:03:45.573 ************************************ 00:03:45.573 02:28:32 env -- common/autotest_common.sh@1142 -- # return 0 00:03:45.573 02:28:32 env -- env/env.sh@26 -- # uname 00:03:45.573 02:28:32 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:45.573 00:03:45.573 real 0m1.741s 00:03:45.573 user 0m0.549s 00:03:45.573 sys 0m1.228s 00:03:45.573 02:28:32 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.573 02:28:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.573 ************************************ 00:03:45.573 END TEST env 00:03:45.573 ************************************ 00:03:45.831 02:28:32 -- common/autotest_common.sh@1142 -- # return 0 00:03:45.831 02:28:32 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.831 02:28:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.831 02:28:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.831 02:28:32 -- common/autotest_common.sh@10 -- # set +x 00:03:45.831 ************************************ 00:03:45.831 START TEST rpc 00:03:45.831 ************************************ 00:03:45.831 02:28:32 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.831 * Looking for test storage... 00:03:45.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:45.831 02:28:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:45.831 02:28:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45548 00:03:45.832 02:28:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.832 02:28:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45548 00:03:45.832 02:28:32 rpc -- common/autotest_common.sh@829 -- # '[' -z 45548 ']' 00:03:45.832 02:28:32 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.832 02:28:32 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:45.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.832 02:28:32 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.832 02:28:32 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:45.832 02:28:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:45.832 [2024-07-25 02:28:32.678715] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:03:45.832 [2024-07-25 02:28:32.678948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:46.399 EAL: TSC is not safe to use in SMP mode 00:03:46.399 EAL: TSC is not invariant 00:03:46.399 [2024-07-25 02:28:33.106962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.399 [2024-07-25 02:28:33.197403] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:03:46.399 [2024-07-25 02:28:33.199131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:46.399 [2024-07-25 02:28:33.199151] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45548' to capture a snapshot of events at runtime. 00:03:46.399 [2024-07-25 02:28:33.199181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.966 02:28:33 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:46.966 02:28:33 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:46.966 02:28:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.966 02:28:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.966 02:28:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:46.966 02:28:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:46.966 02:28:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.966 02:28:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.966 02:28:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.966 ************************************ 00:03:46.966 START TEST rpc_integrity 00:03:46.966 ************************************ 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.966 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.966 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.966 { 00:03:46.966 "name": "Malloc0", 00:03:46.966 "aliases": [ 00:03:46.966 "96d5d809-4a2d-11ef-9c8e-7947904e2597" 00:03:46.966 ], 00:03:46.966 "product_name": "Malloc disk", 00:03:46.966 "block_size": 512, 00:03:46.966 "num_blocks": 16384, 00:03:46.966 "uuid": "96d5d809-4a2d-11ef-9c8e-7947904e2597", 00:03:46.966 "assigned_rate_limits": { 00:03:46.966 "rw_ios_per_sec": 0, 00:03:46.966 "rw_mbytes_per_sec": 0, 00:03:46.966 "r_mbytes_per_sec": 0, 00:03:46.966 "w_mbytes_per_sec": 0 00:03:46.966 }, 00:03:46.966 "claimed": false, 00:03:46.966 
"zoned": false, 00:03:46.966 "supported_io_types": { 00:03:46.966 "read": true, 00:03:46.966 "write": true, 00:03:46.966 "unmap": true, 00:03:46.966 "flush": true, 00:03:46.966 "reset": true, 00:03:46.966 "nvme_admin": false, 00:03:46.966 "nvme_io": false, 00:03:46.966 "nvme_io_md": false, 00:03:46.966 "write_zeroes": true, 00:03:46.966 "zcopy": true, 00:03:46.966 "get_zone_info": false, 00:03:46.966 "zone_management": false, 00:03:46.966 "zone_append": false, 00:03:46.966 "compare": false, 00:03:46.966 "compare_and_write": false, 00:03:46.966 "abort": true, 00:03:46.966 "seek_hole": false, 00:03:46.966 "seek_data": false, 00:03:46.966 "copy": true, 00:03:46.967 "nvme_iov_md": false 00:03:46.967 }, 00:03:46.967 "memory_domains": [ 00:03:46.967 { 00:03:46.967 "dma_device_id": "system", 00:03:46.967 "dma_device_type": 1 00:03:46.967 }, 00:03:46.967 { 00:03:46.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.967 "dma_device_type": 2 00:03:46.967 } 00:03:46.967 ], 00:03:46.967 "driver_specific": {} 00:03:46.967 } 00:03:46.967 ]' 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 [2024-07-25 02:28:33.679522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.967 [2024-07-25 02:28:33.679559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.967 [2024-07-25 02:28:33.680082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b4541c37a00 00:03:46.967 [2024-07-25 02:28:33.680103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.967 [2024-07-25 02:28:33.680697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.967 [2024-07-25 02:28:33.680724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.967 Passthru0 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.967 { 00:03:46.967 "name": "Malloc0", 00:03:46.967 "aliases": [ 00:03:46.967 "96d5d809-4a2d-11ef-9c8e-7947904e2597" 00:03:46.967 ], 00:03:46.967 "product_name": "Malloc disk", 00:03:46.967 "block_size": 512, 00:03:46.967 "num_blocks": 16384, 00:03:46.967 "uuid": "96d5d809-4a2d-11ef-9c8e-7947904e2597", 00:03:46.967 "assigned_rate_limits": { 00:03:46.967 "rw_ios_per_sec": 0, 00:03:46.967 "rw_mbytes_per_sec": 0, 00:03:46.967 "r_mbytes_per_sec": 0, 00:03:46.967 "w_mbytes_per_sec": 0 00:03:46.967 }, 00:03:46.967 "claimed": true, 00:03:46.967 "claim_type": "exclusive_write", 00:03:46.967 "zoned": false, 00:03:46.967 "supported_io_types": { 00:03:46.967 "read": true, 00:03:46.967 "write": true, 00:03:46.967 "unmap": true, 00:03:46.967 "flush": true, 00:03:46.967 "reset": true, 
00:03:46.967 "nvme_admin": false, 00:03:46.967 "nvme_io": false, 00:03:46.967 "nvme_io_md": false, 00:03:46.967 "write_zeroes": true, 00:03:46.967 "zcopy": true, 00:03:46.967 "get_zone_info": false, 00:03:46.967 "zone_management": false, 00:03:46.967 "zone_append": false, 00:03:46.967 "compare": false, 00:03:46.967 "compare_and_write": false, 00:03:46.967 "abort": true, 00:03:46.967 "seek_hole": false, 00:03:46.967 "seek_data": false, 00:03:46.967 "copy": true, 00:03:46.967 "nvme_iov_md": false 00:03:46.967 }, 00:03:46.967 "memory_domains": [ 00:03:46.967 { 00:03:46.967 "dma_device_id": "system", 00:03:46.967 "dma_device_type": 1 00:03:46.967 }, 00:03:46.967 { 00:03:46.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.967 "dma_device_type": 2 00:03:46.967 } 00:03:46.967 ], 00:03:46.967 "driver_specific": {} 00:03:46.967 }, 00:03:46.967 { 00:03:46.967 "name": "Passthru0", 00:03:46.967 "aliases": [ 00:03:46.967 "a23bcf11-e100-8859-85ce-26342ba32c22" 00:03:46.967 ], 00:03:46.967 "product_name": "passthru", 00:03:46.967 "block_size": 512, 00:03:46.967 "num_blocks": 16384, 00:03:46.967 "uuid": "a23bcf11-e100-8859-85ce-26342ba32c22", 00:03:46.967 "assigned_rate_limits": { 00:03:46.967 "rw_ios_per_sec": 0, 00:03:46.967 "rw_mbytes_per_sec": 0, 00:03:46.967 "r_mbytes_per_sec": 0, 00:03:46.967 "w_mbytes_per_sec": 0 00:03:46.967 }, 00:03:46.967 "claimed": false, 00:03:46.967 "zoned": false, 00:03:46.967 "supported_io_types": { 00:03:46.967 "read": true, 00:03:46.967 "write": true, 00:03:46.967 "unmap": true, 00:03:46.967 "flush": true, 00:03:46.967 "reset": true, 00:03:46.967 "nvme_admin": false, 00:03:46.967 "nvme_io": false, 00:03:46.967 "nvme_io_md": false, 00:03:46.967 "write_zeroes": true, 00:03:46.967 "zcopy": true, 00:03:46.967 "get_zone_info": false, 00:03:46.967 "zone_management": false, 00:03:46.967 "zone_append": false, 00:03:46.967 "compare": false, 00:03:46.967 "compare_and_write": false, 00:03:46.967 "abort": true, 00:03:46.967 "seek_hole": false, 00:03:46.967 "seek_data": false, 00:03:46.967 "copy": true, 00:03:46.967 "nvme_iov_md": false 00:03:46.967 }, 00:03:46.967 "memory_domains": [ 00:03:46.967 { 00:03:46.967 "dma_device_id": "system", 00:03:46.967 "dma_device_type": 1 00:03:46.967 }, 00:03:46.967 { 00:03:46.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.967 "dma_device_type": 2 00:03:46.967 } 00:03:46.967 ], 00:03:46.967 "driver_specific": { 00:03:46.967 "passthru": { 00:03:46.967 "name": "Passthru0", 00:03:46.967 "base_bdev_name": "Malloc0" 00:03:46.967 } 00:03:46.967 } 00:03:46.967 } 00:03:46.967 ]' 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.967 
02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:46.967 02:28:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.967 00:03:46.967 real 0m0.167s 00:03:46.967 user 0m0.067s 00:03:46.967 sys 0m0.037s 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 ************************************ 00:03:46.967 END TEST rpc_integrity 00:03:46.967 ************************************ 00:03:46.967 02:28:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:46.967 02:28:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:46.967 02:28:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.967 02:28:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.967 02:28:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 ************************************ 00:03:46.967 START TEST rpc_plugins 00:03:46.967 ************************************ 00:03:46.967 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:46.967 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:46.967 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:46.967 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:46.967 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:46.967 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:46.967 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:46.967 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.228 { 00:03:47.228 "name": "Malloc1", 00:03:47.228 "aliases": [ 00:03:47.228 "96f45c2b-4a2d-11ef-9c8e-7947904e2597" 00:03:47.228 ], 00:03:47.228 "product_name": "Malloc disk", 00:03:47.228 "block_size": 4096, 00:03:47.228 "num_blocks": 256, 00:03:47.228 "uuid": "96f45c2b-4a2d-11ef-9c8e-7947904e2597", 00:03:47.228 "assigned_rate_limits": { 00:03:47.228 "rw_ios_per_sec": 0, 00:03:47.228 "rw_mbytes_per_sec": 0, 00:03:47.228 "r_mbytes_per_sec": 0, 00:03:47.228 "w_mbytes_per_sec": 0 00:03:47.228 }, 00:03:47.228 "claimed": false, 00:03:47.228 "zoned": false, 00:03:47.228 "supported_io_types": { 00:03:47.228 "read": true, 00:03:47.228 "write": true, 00:03:47.228 "unmap": true, 00:03:47.228 "flush": true, 00:03:47.228 "reset": true, 00:03:47.228 "nvme_admin": false, 00:03:47.228 "nvme_io": false, 00:03:47.228 "nvme_io_md": false, 00:03:47.228 "write_zeroes": true, 00:03:47.228 "zcopy": true, 00:03:47.228 "get_zone_info": false, 00:03:47.228 "zone_management": false, 00:03:47.228 "zone_append": false, 00:03:47.228 "compare": false, 00:03:47.228 "compare_and_write": false, 00:03:47.228 "abort": true, 00:03:47.228 "seek_hole": false, 00:03:47.228 "seek_data": false, 00:03:47.228 "copy": 
true, 00:03:47.228 "nvme_iov_md": false 00:03:47.228 }, 00:03:47.228 "memory_domains": [ 00:03:47.228 { 00:03:47.228 "dma_device_id": "system", 00:03:47.228 "dma_device_type": 1 00:03:47.228 }, 00:03:47.228 { 00:03:47.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.228 "dma_device_type": 2 00:03:47.228 } 00:03:47.228 ], 00:03:47.228 "driver_specific": {} 00:03:47.228 } 00:03:47.228 ]' 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:47.228 02:28:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.228 00:03:47.228 real 0m0.088s 00:03:47.228 user 0m0.034s 00:03:47.228 sys 0m0.016s 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.228 02:28:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 ************************************ 00:03:47.228 END TEST rpc_plugins 00:03:47.228 ************************************ 00:03:47.228 02:28:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:47.228 02:28:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:47.228 02:28:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.228 02:28:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.228 02:28:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 ************************************ 00:03:47.228 START TEST rpc_trace_cmd_test 00:03:47.228 ************************************ 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:47.228 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45548", 00:03:47.228 "tpoint_group_mask": "0x8", 00:03:47.228 "iscsi_conn": { 00:03:47.228 "mask": "0x2", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "scsi": { 00:03:47.228 "mask": "0x4", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "bdev": { 00:03:47.228 "mask": "0x8", 00:03:47.228 "tpoint_mask": "0xffffffffffffffff" 00:03:47.228 }, 00:03:47.228 "nvmf_rdma": { 00:03:47.228 "mask": "0x10", 00:03:47.228 
"tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "nvmf_tcp": { 00:03:47.228 "mask": "0x20", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "blobfs": { 00:03:47.228 "mask": "0x80", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "dsa": { 00:03:47.228 "mask": "0x200", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "thread": { 00:03:47.228 "mask": "0x400", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "nvme_pcie": { 00:03:47.228 "mask": "0x800", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "iaa": { 00:03:47.228 "mask": "0x1000", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "nvme_tcp": { 00:03:47.228 "mask": "0x2000", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "bdev_nvme": { 00:03:47.228 "mask": "0x4000", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 }, 00:03:47.228 "sock": { 00:03:47.228 "mask": "0x8000", 00:03:47.228 "tpoint_mask": "0x0" 00:03:47.228 } 00:03:47.228 }' 00:03:47.228 02:28:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.228 00:03:47.228 real 0m0.080s 00:03:47.228 user 0m0.041s 00:03:47.228 sys 0m0.032s 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.228 02:28:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 ************************************ 00:03:47.228 END TEST rpc_trace_cmd_test 00:03:47.228 ************************************ 00:03:47.228 02:28:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:47.228 02:28:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:47.228 02:28:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.228 02:28:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:47.228 02:28:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.228 02:28:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.228 02:28:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 ************************************ 00:03:47.228 START TEST rpc_daemon_integrity 00:03:47.228 ************************************ 00:03:47.228 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:47.228 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.228 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.228 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.228 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.228 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.228 
02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.489 { 00:03:47.489 "name": "Malloc2", 00:03:47.489 "aliases": [ 00:03:47.489 "97218752-4a2d-11ef-9c8e-7947904e2597" 00:03:47.489 ], 00:03:47.489 "product_name": "Malloc disk", 00:03:47.489 "block_size": 512, 00:03:47.489 "num_blocks": 16384, 00:03:47.489 "uuid": "97218752-4a2d-11ef-9c8e-7947904e2597", 00:03:47.489 "assigned_rate_limits": { 00:03:47.489 "rw_ios_per_sec": 0, 00:03:47.489 "rw_mbytes_per_sec": 0, 00:03:47.489 "r_mbytes_per_sec": 0, 00:03:47.489 "w_mbytes_per_sec": 0 00:03:47.489 }, 00:03:47.489 "claimed": false, 00:03:47.489 "zoned": false, 00:03:47.489 "supported_io_types": { 00:03:47.489 "read": true, 00:03:47.489 "write": true, 00:03:47.489 "unmap": true, 00:03:47.489 "flush": true, 00:03:47.489 "reset": true, 00:03:47.489 "nvme_admin": false, 00:03:47.489 "nvme_io": false, 00:03:47.489 "nvme_io_md": false, 00:03:47.489 "write_zeroes": true, 00:03:47.489 "zcopy": true, 00:03:47.489 "get_zone_info": false, 00:03:47.489 "zone_management": false, 00:03:47.489 "zone_append": false, 00:03:47.489 "compare": false, 00:03:47.489 "compare_and_write": false, 00:03:47.489 "abort": true, 00:03:47.489 "seek_hole": false, 00:03:47.489 "seek_data": false, 00:03:47.489 "copy": true, 00:03:47.489 "nvme_iov_md": false 00:03:47.489 }, 00:03:47.489 "memory_domains": [ 00:03:47.489 { 00:03:47.489 "dma_device_id": "system", 00:03:47.489 "dma_device_type": 1 00:03:47.489 }, 00:03:47.489 { 00:03:47.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.489 "dma_device_type": 2 00:03:47.489 } 00:03:47.489 ], 00:03:47.489 "driver_specific": {} 00:03:47.489 } 00:03:47.489 ]' 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.489 [2024-07-25 02:28:34.179542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:47.489 [2024-07-25 02:28:34.179581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.489 [2024-07-25 02:28:34.179605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b4541c37a00 00:03:47.489 [2024-07-25 
02:28:34.179612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.489 [2024-07-25 02:28:34.180036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.489 [2024-07-25 02:28:34.180060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.489 Passthru0 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.489 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.489 { 00:03:47.489 "name": "Malloc2", 00:03:47.489 "aliases": [ 00:03:47.489 "97218752-4a2d-11ef-9c8e-7947904e2597" 00:03:47.489 ], 00:03:47.489 "product_name": "Malloc disk", 00:03:47.489 "block_size": 512, 00:03:47.489 "num_blocks": 16384, 00:03:47.489 "uuid": "97218752-4a2d-11ef-9c8e-7947904e2597", 00:03:47.489 "assigned_rate_limits": { 00:03:47.489 "rw_ios_per_sec": 0, 00:03:47.489 "rw_mbytes_per_sec": 0, 00:03:47.489 "r_mbytes_per_sec": 0, 00:03:47.489 "w_mbytes_per_sec": 0 00:03:47.489 }, 00:03:47.489 "claimed": true, 00:03:47.489 "claim_type": "exclusive_write", 00:03:47.489 "zoned": false, 00:03:47.489 "supported_io_types": { 00:03:47.489 "read": true, 00:03:47.489 "write": true, 00:03:47.489 "unmap": true, 00:03:47.489 "flush": true, 00:03:47.489 "reset": true, 00:03:47.489 "nvme_admin": false, 00:03:47.489 "nvme_io": false, 00:03:47.489 "nvme_io_md": false, 00:03:47.489 "write_zeroes": true, 00:03:47.489 "zcopy": true, 00:03:47.489 "get_zone_info": false, 00:03:47.489 "zone_management": false, 00:03:47.489 "zone_append": false, 00:03:47.489 "compare": false, 00:03:47.489 "compare_and_write": false, 00:03:47.489 "abort": true, 00:03:47.489 "seek_hole": false, 00:03:47.489 "seek_data": false, 00:03:47.489 "copy": true, 00:03:47.489 "nvme_iov_md": false 00:03:47.489 }, 00:03:47.489 "memory_domains": [ 00:03:47.489 { 00:03:47.489 "dma_device_id": "system", 00:03:47.489 "dma_device_type": 1 00:03:47.489 }, 00:03:47.489 { 00:03:47.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.489 "dma_device_type": 2 00:03:47.489 } 00:03:47.489 ], 00:03:47.489 "driver_specific": {} 00:03:47.489 }, 00:03:47.489 { 00:03:47.489 "name": "Passthru0", 00:03:47.489 "aliases": [ 00:03:47.489 "f1cbb5e6-2eac-015e-a5d2-37a01f05254b" 00:03:47.489 ], 00:03:47.489 "product_name": "passthru", 00:03:47.489 "block_size": 512, 00:03:47.489 "num_blocks": 16384, 00:03:47.489 "uuid": "f1cbb5e6-2eac-015e-a5d2-37a01f05254b", 00:03:47.489 "assigned_rate_limits": { 00:03:47.489 "rw_ios_per_sec": 0, 00:03:47.489 "rw_mbytes_per_sec": 0, 00:03:47.489 "r_mbytes_per_sec": 0, 00:03:47.489 "w_mbytes_per_sec": 0 00:03:47.489 }, 00:03:47.489 "claimed": false, 00:03:47.489 "zoned": false, 00:03:47.489 "supported_io_types": { 00:03:47.489 "read": true, 00:03:47.489 "write": true, 00:03:47.489 "unmap": true, 00:03:47.489 "flush": true, 00:03:47.489 "reset": true, 00:03:47.489 "nvme_admin": false, 00:03:47.489 "nvme_io": false, 00:03:47.489 "nvme_io_md": false, 00:03:47.489 "write_zeroes": true, 00:03:47.489 "zcopy": true, 00:03:47.489 "get_zone_info": false, 00:03:47.489 "zone_management": false, 00:03:47.489 "zone_append": 
false, 00:03:47.489 "compare": false, 00:03:47.489 "compare_and_write": false, 00:03:47.489 "abort": true, 00:03:47.489 "seek_hole": false, 00:03:47.489 "seek_data": false, 00:03:47.489 "copy": true, 00:03:47.489 "nvme_iov_md": false 00:03:47.489 }, 00:03:47.489 "memory_domains": [ 00:03:47.489 { 00:03:47.489 "dma_device_id": "system", 00:03:47.489 "dma_device_type": 1 00:03:47.489 }, 00:03:47.489 { 00:03:47.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.490 "dma_device_type": 2 00:03:47.490 } 00:03:47.490 ], 00:03:47.490 "driver_specific": { 00:03:47.490 "passthru": { 00:03:47.490 "name": "Passthru0", 00:03:47.490 "base_bdev_name": "Malloc2" 00:03:47.490 } 00:03:47.490 } 00:03:47.490 } 00:03:47.490 ]' 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.490 00:03:47.490 real 0m0.173s 00:03:47.490 user 0m0.055s 00:03:47.490 sys 0m0.059s 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.490 02:28:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.490 ************************************ 00:03:47.490 END TEST rpc_daemon_integrity 00:03:47.490 ************************************ 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:47.490 02:28:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.490 02:28:34 rpc -- rpc/rpc.sh@84 -- # killprocess 45548 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@948 -- # '[' -z 45548 ']' 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@952 -- # kill -0 45548 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@953 -- # uname 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45548 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@956 -- # tail -1 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:47.490 02:28:34 rpc -- 
common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:47.490 killing process with pid 45548 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45548' 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@967 -- # kill 45548 00:03:47.490 02:28:34 rpc -- common/autotest_common.sh@972 -- # wait 45548 00:03:47.749 00:03:47.749 real 0m2.076s 00:03:47.749 user 0m2.188s 00:03:47.749 sys 0m0.884s 00:03:47.749 02:28:34 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.749 02:28:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.749 ************************************ 00:03:47.749 END TEST rpc 00:03:47.749 ************************************ 00:03:47.749 02:28:34 -- common/autotest_common.sh@1142 -- # return 0 00:03:47.749 02:28:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:47.749 02:28:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.749 02:28:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.749 02:28:34 -- common/autotest_common.sh@10 -- # set +x 00:03:47.749 ************************************ 00:03:47.749 START TEST skip_rpc 00:03:47.749 ************************************ 00:03:47.749 02:28:34 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:48.009 * Looking for test storage... 00:03:48.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:48.009 02:28:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:48.009 02:28:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:48.009 02:28:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:48.009 02:28:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.009 02:28:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.009 02:28:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.009 ************************************ 00:03:48.009 START TEST skip_rpc 00:03:48.009 ************************************ 00:03:48.009 02:28:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:48.009 02:28:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:48.009 02:28:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45724 00:03:48.009 02:28:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.009 02:28:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:48.009 [2024-07-25 02:28:34.812385] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:03:48.009 [2024-07-25 02:28:34.812548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:48.579 EAL: TSC is not safe to use in SMP mode 00:03:48.579 EAL: TSC is not invariant 00:03:48.579 [2024-07-25 02:28:35.230044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.579 [2024-07-25 02:28:35.321727] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
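(For orientation: the skip_rpc case being set up here launches spdk_tgt with --no-rpc-server and then, as the following lines show, expects an RPC call to fail. A rough stand-alone sketch of that check, not the test's literal code; the spdk_tgt and rpc.py paths are assumed to point into an SPDK build tree:)

    # Start the target without an RPC server, then confirm an RPC call fails.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded with --no-rpc-server" >&2
    fi
    kill "$tgt_pid"
    wait "$tgt_pid"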
00:03:48.579 [2024-07-25 02:28:35.323499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45724 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45724 ']' 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45724 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45724 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:53.853 killing process with pid 45724 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45724' 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45724 00:03:53.853 02:28:39 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45724 00:03:53.853 00:03:53.853 real 0m5.271s 00:03:53.853 user 0m4.851s 00:03:53.853 sys 0m0.448s 00:03:53.853 02:28:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.853 02:28:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.853 ************************************ 00:03:53.853 END TEST skip_rpc 00:03:53.853 ************************************ 00:03:53.853 02:28:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:53.853 02:28:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:53.853 02:28:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.853 02:28:40 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.853 02:28:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.853 ************************************ 00:03:53.853 START TEST skip_rpc_with_json 00:03:53.853 ************************************ 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45769 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45769 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45769 ']' 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:53.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:53.853 02:28:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.853 [2024-07-25 02:28:40.148074] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:03:53.853 [2024-07-25 02:28:40.148416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:53.853 EAL: TSC is not safe to use in SMP mode 00:03:53.853 EAL: TSC is not invariant 00:03:53.853 [2024-07-25 02:28:40.576119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.853 [2024-07-25 02:28:40.669767] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
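(The skip_rpc_with_json flow whose output follows boils down to creating the TCP transport, saving the target configuration as JSON, and relaunching from that file. A condensed sketch only; the output path is a placeholder, not the test's CONFIG_PATH:)

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/spdk_config.json   # placeholder path
    # A fresh target can then be started straight from the saved file:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/spdk_config.json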
00:03:53.853 [2024-07-25 02:28:40.671522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.422 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:54.422 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:54.422 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:54.422 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:54.422 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.422 [2024-07-25 02:28:41.071720] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:54.422 request: 00:03:54.422 { 00:03:54.422 "trtype": "tcp", 00:03:54.422 "method": "nvmf_get_transports", 00:03:54.422 "req_id": 1 00:03:54.422 } 00:03:54.423 Got JSON-RPC error response 00:03:54.423 response: 00:03:54.423 { 00:03:54.423 "code": -19, 00:03:54.423 "message": "Operation not supported by device" 00:03:54.423 } 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.423 [2024-07-25 02:28:41.083739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:54.423 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:54.423 { 00:03:54.423 "subsystems": [ 00:03:54.423 { 00:03:54.423 "subsystem": "vmd", 00:03:54.423 "config": [] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "iobuf", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "iobuf_set_options", 00:03:54.423 "params": { 00:03:54.423 "small_pool_count": 8192, 00:03:54.423 "large_pool_count": 1024, 00:03:54.423 "small_bufsize": 8192, 00:03:54.423 "large_bufsize": 135168 00:03:54.423 } 00:03:54.423 } 00:03:54.423 ] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "scheduler", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "framework_set_scheduler", 00:03:54.423 "params": { 00:03:54.423 "name": "static" 00:03:54.423 } 00:03:54.423 } 00:03:54.423 ] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "sock", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "sock_set_default_impl", 00:03:54.423 "params": { 00:03:54.423 "impl_name": "posix" 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "sock_impl_set_options", 00:03:54.423 "params": { 00:03:54.423 "impl_name": "ssl", 00:03:54.423 "recv_buf_size": 4096, 00:03:54.423 "send_buf_size": 4096, 00:03:54.423 "enable_recv_pipe": true, 00:03:54.423 "enable_quickack": false, 00:03:54.423 "enable_placement_id": 0, 00:03:54.423 
"enable_zerocopy_send_server": true, 00:03:54.423 "enable_zerocopy_send_client": false, 00:03:54.423 "zerocopy_threshold": 0, 00:03:54.423 "tls_version": 0, 00:03:54.423 "enable_ktls": false 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "sock_impl_set_options", 00:03:54.423 "params": { 00:03:54.423 "impl_name": "posix", 00:03:54.423 "recv_buf_size": 2097152, 00:03:54.423 "send_buf_size": 2097152, 00:03:54.423 "enable_recv_pipe": true, 00:03:54.423 "enable_quickack": false, 00:03:54.423 "enable_placement_id": 0, 00:03:54.423 "enable_zerocopy_send_server": true, 00:03:54.423 "enable_zerocopy_send_client": false, 00:03:54.423 "zerocopy_threshold": 0, 00:03:54.423 "tls_version": 0, 00:03:54.423 "enable_ktls": false 00:03:54.423 } 00:03:54.423 } 00:03:54.423 ] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "keyring", 00:03:54.423 "config": [] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "accel", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "accel_set_options", 00:03:54.423 "params": { 00:03:54.423 "small_cache_size": 128, 00:03:54.423 "large_cache_size": 16, 00:03:54.423 "task_count": 2048, 00:03:54.423 "sequence_count": 2048, 00:03:54.423 "buf_count": 2048 00:03:54.423 } 00:03:54.423 } 00:03:54.423 ] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "bdev", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "bdev_set_options", 00:03:54.423 "params": { 00:03:54.423 "bdev_io_pool_size": 65535, 00:03:54.423 "bdev_io_cache_size": 256, 00:03:54.423 "bdev_auto_examine": true, 00:03:54.423 "iobuf_small_cache_size": 128, 00:03:54.423 "iobuf_large_cache_size": 16 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "bdev_raid_set_options", 00:03:54.423 "params": { 00:03:54.423 "process_window_size_kb": 1024, 00:03:54.423 "process_max_bandwidth_mb_sec": 0 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "bdev_nvme_set_options", 00:03:54.423 "params": { 00:03:54.423 "action_on_timeout": "none", 00:03:54.423 "timeout_us": 0, 00:03:54.423 "timeout_admin_us": 0, 00:03:54.423 "keep_alive_timeout_ms": 10000, 00:03:54.423 "arbitration_burst": 0, 00:03:54.423 "low_priority_weight": 0, 00:03:54.423 "medium_priority_weight": 0, 00:03:54.423 "high_priority_weight": 0, 00:03:54.423 "nvme_adminq_poll_period_us": 10000, 00:03:54.423 "nvme_ioq_poll_period_us": 0, 00:03:54.423 "io_queue_requests": 0, 00:03:54.423 "delay_cmd_submit": true, 00:03:54.423 "transport_retry_count": 4, 00:03:54.423 "bdev_retry_count": 3, 00:03:54.423 "transport_ack_timeout": 0, 00:03:54.423 "ctrlr_loss_timeout_sec": 0, 00:03:54.423 "reconnect_delay_sec": 0, 00:03:54.423 "fast_io_fail_timeout_sec": 0, 00:03:54.423 "disable_auto_failback": false, 00:03:54.423 "generate_uuids": false, 00:03:54.423 "transport_tos": 0, 00:03:54.423 "nvme_error_stat": false, 00:03:54.423 "rdma_srq_size": 0, 00:03:54.423 "io_path_stat": false, 00:03:54.423 "allow_accel_sequence": false, 00:03:54.423 "rdma_max_cq_size": 0, 00:03:54.423 "rdma_cm_event_timeout_ms": 0, 00:03:54.423 "dhchap_digests": [ 00:03:54.423 "sha256", 00:03:54.423 "sha384", 00:03:54.423 "sha512" 00:03:54.423 ], 00:03:54.423 "dhchap_dhgroups": [ 00:03:54.423 "null", 00:03:54.423 "ffdhe2048", 00:03:54.423 "ffdhe3072", 00:03:54.423 "ffdhe4096", 00:03:54.423 "ffdhe6144", 00:03:54.423 "ffdhe8192" 00:03:54.423 ] 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "bdev_nvme_set_hotplug", 00:03:54.423 "params": { 00:03:54.423 "period_us": 100000, 00:03:54.423 "enable": 
false 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "bdev_wait_for_examine" 00:03:54.423 } 00:03:54.423 ] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "scsi", 00:03:54.423 "config": null 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "nvmf", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "nvmf_set_config", 00:03:54.423 "params": { 00:03:54.423 "discovery_filter": "match_any", 00:03:54.423 "admin_cmd_passthru": { 00:03:54.423 "identify_ctrlr": false 00:03:54.423 } 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "nvmf_set_max_subsystems", 00:03:54.423 "params": { 00:03:54.423 "max_subsystems": 1024 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "nvmf_set_crdt", 00:03:54.423 "params": { 00:03:54.423 "crdt1": 0, 00:03:54.423 "crdt2": 0, 00:03:54.423 "crdt3": 0 00:03:54.423 } 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "method": "nvmf_create_transport", 00:03:54.423 "params": { 00:03:54.423 "trtype": "TCP", 00:03:54.423 "max_queue_depth": 128, 00:03:54.423 "max_io_qpairs_per_ctrlr": 127, 00:03:54.423 "in_capsule_data_size": 4096, 00:03:54.423 "max_io_size": 131072, 00:03:54.423 "io_unit_size": 131072, 00:03:54.423 "max_aq_depth": 128, 00:03:54.423 "num_shared_buffers": 511, 00:03:54.423 "buf_cache_size": 4294967295, 00:03:54.423 "dif_insert_or_strip": false, 00:03:54.423 "zcopy": false, 00:03:54.423 "c2h_success": true, 00:03:54.423 "sock_priority": 0, 00:03:54.423 "abort_timeout_sec": 1, 00:03:54.423 "ack_timeout": 0, 00:03:54.423 "data_wr_pool_size": 0 00:03:54.423 } 00:03:54.423 } 00:03:54.423 ] 00:03:54.423 }, 00:03:54.423 { 00:03:54.423 "subsystem": "iscsi", 00:03:54.423 "config": [ 00:03:54.423 { 00:03:54.423 "method": "iscsi_set_options", 00:03:54.423 "params": { 00:03:54.423 "node_base": "iqn.2016-06.io.spdk", 00:03:54.423 "max_sessions": 128, 00:03:54.423 "max_connections_per_session": 2, 00:03:54.423 "max_queue_depth": 64, 00:03:54.423 "default_time2wait": 2, 00:03:54.423 "default_time2retain": 20, 00:03:54.423 "first_burst_length": 8192, 00:03:54.423 "immediate_data": true, 00:03:54.423 "allow_duplicated_isid": false, 00:03:54.423 "error_recovery_level": 0, 00:03:54.423 "nop_timeout": 60, 00:03:54.423 "nop_in_interval": 30, 00:03:54.423 "disable_chap": false, 00:03:54.423 "require_chap": false, 00:03:54.423 "mutual_chap": false, 00:03:54.423 "chap_group": 0, 00:03:54.423 "max_large_datain_per_connection": 64, 00:03:54.423 "max_r2t_per_connection": 4, 00:03:54.423 "pdu_pool_size": 36864, 00:03:54.424 "immediate_data_pool_size": 16384, 00:03:54.424 "data_out_pool_size": 2048 00:03:54.424 } 00:03:54.424 } 00:03:54.424 ] 00:03:54.424 } 00:03:54.424 ] 00:03:54.424 } 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45769 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45769 ']' 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45769 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45769 00:03:54.424 
02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:54.424 killing process with pid 45769 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45769' 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45769 00:03:54.424 02:28:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45769 00:03:54.683 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:54.683 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45783 00:03:54.683 02:28:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45783 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45783 ']' 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45783 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45783 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:59.975 killing process with pid 45783 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45783' 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45783 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45783 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:59.975 00:03:59.975 real 0m6.640s 00:03:59.975 user 0m6.107s 00:03:59.975 sys 0m1.024s 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:59.975 ************************************ 00:03:59.975 END TEST skip_rpc_with_json 00:03:59.975 ************************************ 00:03:59.975 02:28:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:59.975 02:28:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:59.975 02:28:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.975 02:28:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.975 02:28:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.975 ************************************ 00:03:59.975 START TEST skip_rpc_with_delay 00:03:59.975 
************************************ 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.975 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:59.976 [2024-07-25 02:28:46.849143] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
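(The error just above is exactly what skip_rpc_with_delay asserts: --wait-for-rpc is rejected when no RPC server will be started, so the launch must fail. A minimal equivalent check, sketch only, same binary path as assumed earlier:)

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc together with --no-rpc-server" >&2
    fi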
00:03:59.976 [2024-07-25 02:28:46.849327] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:59.976 00:03:59.976 real 0m0.014s 00:03:59.976 user 0m0.004s 00:03:59.976 sys 0m0.011s 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.976 02:28:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:59.976 ************************************ 00:03:59.976 END TEST skip_rpc_with_delay 00:03:59.976 ************************************ 00:04:00.235 02:28:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:00.235 02:28:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:00.235 02:28:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:04:00.235 02:28:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:00.235 00:04:00.235 real 0m12.290s 00:04:00.235 user 0m11.139s 00:04:00.235 sys 0m1.706s 00:04:00.235 02:28:46 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.235 02:28:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.235 ************************************ 00:04:00.235 END TEST skip_rpc 00:04:00.235 ************************************ 00:04:00.235 02:28:46 -- common/autotest_common.sh@1142 -- # return 0 00:04:00.235 02:28:46 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:00.235 02:28:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.235 02:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.235 02:28:46 -- common/autotest_common.sh@10 -- # set +x 00:04:00.235 ************************************ 00:04:00.235 START TEST rpc_client 00:04:00.235 ************************************ 00:04:00.235 02:28:46 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:00.235 * Looking for test storage... 
00:04:00.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:00.495 02:28:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:00.495 OK 00:04:00.495 02:28:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:00.495 00:04:00.495 real 0m0.184s 00:04:00.495 user 0m0.098s 00:04:00.495 sys 0m0.138s 00:04:00.495 02:28:47 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.495 02:28:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:00.495 ************************************ 00:04:00.495 END TEST rpc_client 00:04:00.495 ************************************ 00:04:00.495 02:28:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:00.495 02:28:47 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:00.495 02:28:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.495 02:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.495 02:28:47 -- common/autotest_common.sh@10 -- # set +x 00:04:00.495 ************************************ 00:04:00.495 START TEST json_config 00:04:00.495 ************************************ 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:00.495 02:28:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:00.495 02:28:47 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:00.495 02:28:47 json_config -- nvmf/common.sh@7 -- # return 0 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:00.495 INFO: JSON configuration test init 00:04:00.495 02:28:47 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.495 02:28:47 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:00.495 02:28:47 json_config -- json_config/common.sh@9 -- # local app=target 00:04:00.495 02:28:47 json_config -- json_config/common.sh@10 -- # shift 00:04:00.495 02:28:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:00.495 02:28:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:00.495 02:28:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:00.495 02:28:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.495 02:28:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:00.495 Waiting for target to run... 00:04:00.495 02:28:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45942 00:04:00.495 02:28:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:00.495 02:28:47 json_config -- json_config/common.sh@25 -- # waitforlisten 45942 /var/tmp/spdk_tgt.sock 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 45942 ']' 00:04:00.495 02:28:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:00.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:00.495 02:28:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.495 [2024-07-25 02:28:47.380991] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:00.496 [2024-07-25 02:28:47.381345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:00.754 EAL: TSC is not safe to use in SMP mode 00:04:00.754 EAL: TSC is not invariant 00:04:00.754 [2024-07-25 02:28:47.600558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.012 [2024-07-25 02:28:47.678990] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
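(Throughout the json_config output that follows, tgt_rpc is a thin wrapper that points rpc.py at the target's UNIX socket; roughly the behaviour of json_config/common.sh, shown here as a sketch rather than a verbatim copy:)

    tgt_rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }
    # e.g. the notification-type query seen a few lines below:
    tgt_rpc notify_get_types | jq -r '.[]'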
00:04:01.012 [2024-07-25 02:28:47.680792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.579 02:28:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.579 02:28:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:01.579 00:04:01.579 02:28:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:01.579 02:28:48 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:01.579 02:28:48 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:01.579 02:28:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.579 02:28:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.579 02:28:48 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:01.579 02:28:48 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:01.579 02:28:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.579 02:28:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.579 02:28:48 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:01.579 02:28:48 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:01.579 02:28:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:01.839 [2024-07-25 02:28:48.569827] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:01.839 02:28:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.839 02:28:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:01.839 02:28:48 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:01.839 02:28:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@51 -- # sort 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@58 -- # timing_exit 
tgt_check_notification_types 00:04:02.099 02:28:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:02.099 02:28:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:04:02.099 02:28:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:02.099 02:28:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:04:02.099 02:28:48 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:02.099 02:28:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:02.359 02:28:49 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:04:02.359 02:28:49 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:02.359 02:28:49 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:02.359 02:28:49 json_config -- json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:04:02.359 02:28:49 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:04:02.359 02:28:49 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:04:02.359 02:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:04:02.618 Nvme0n1p0 Nvme0n1p1 00:04:02.618 02:28:49 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:04:02.618 02:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:04:02.618 [2024-07-25 02:28:49.471412] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:02.618 [2024-07-25 02:28:49.471455] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:02.618 00:04:02.618 02:28:49 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:04:02.618 02:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:04:02.877 Malloc3 00:04:02.877 02:28:49 json_config -- json_config/json_config.sh@123 -- # 
tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:02.877 02:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:03.137 [2024-07-25 02:28:49.847423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:03.137 [2024-07-25 02:28:49.847462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.137 [2024-07-25 02:28:49.847487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb5fef038180 00:04:03.137 [2024-07-25 02:28:49.847492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.137 [2024-07-25 02:28:49.847939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.137 [2024-07-25 02:28:49.847966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:03.137 PTBdevFromMalloc3 00:04:03.137 02:28:49 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:04:03.137 02:28:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:04:03.396 Null0 00:04:03.396 02:28:50 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:04:03.396 02:28:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:04:03.396 Malloc0 00:04:03.396 02:28:50 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:04:03.396 02:28:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:04:03.654 Malloc1 00:04:03.654 02:28:50 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:04:03.654 02:28:50 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:04:03.913 102400+0 records in 00:04:03.913 102400+0 records out 00:04:03.913 104857600 bytes transferred in 0.329367 secs (318361005 bytes/sec) 00:04:03.913 02:28:50 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:04:03.913 02:28:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:04:04.178 aio_disk 00:04:04.178 02:28:50 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:04:04.178 02:28:50 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:04.178 02:28:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:04.438 a143838b-4a2d-11ef-9c8e-7947904e2597 00:04:04.438 02:28:51 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc 
bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:04:04.438 02:28:51 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:04:04.438 02:28:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:04:04.718 02:28:51 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:04:04.718 02:28:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:04:04.718 02:28:51 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:04.718 02:28:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:04.977 02:28:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@161 -- # [[ 0 -eq 1 ]] 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a1616c19-4a2d-11ef-9c8e-7947904e2597 bdev_register:a17bab00-4a2d-11ef-9c8e-7947904e2597 bdev_register:a195e9dc-4a2d-11ef-9c8e-7947904e2597 bdev_register:a1af8ca2-4a2d-11ef-9c8e-7947904e2597 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:a1616c19-4a2d-11ef-9c8e-7947904e2597 bdev_register:a17bab00-4a2d-11ef-9c8e-7947904e2597 bdev_register:a195e9dc-4a2d-11ef-9c8e-7947904e2597 bdev_register:a1af8ca2-4a2d-11ef-9c8e-7947904e2597 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@75 -- # sort 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@76 -- # sort 
00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:04:04.977 02:28:51 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:04.977 02:28:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:04:05.236 02:28:52 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:a1616c19-4a2d-11ef-9c8e-7947904e2597 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:a17bab00-4a2d-11ef-9c8e-7947904e2597 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:a195e9dc-4a2d-11ef-9c8e-7947904e2597 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:a1af8ca2-4a2d-11ef-9c8e-7947904e2597 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:04:05.236 02:28:52 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a1616c19-4a2d-11ef-9c8e-7947904e2597 bdev_register:a17bab00-4a2d-11ef-9c8e-7947904e2597 bdev_register:a195e9dc-4a2d-11ef-9c8e-7947904e2597 bdev_register:a1af8ca2-4a2d-11ef-9c8e-7947904e2597 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\1\6\1\6\c\1\9\-\4\a\2\d\-\1\1\e\f\-\9\c\8\e\-\7\9\4\7\9\0\4\e\2\5\9\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\1\7\b\a\b\0\0\-\4\a\2\d\-\1\1\e\f\-\9\c\8\e\-\7\9\4\7\9\0\4\e\2\5\9\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\1\9\5\e\9\d\c\-\4\a\2\d\-\1\1\e\f\-\9\c\8\e\-\7\9\4\7\9\0\4\e\2\5\9\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\1\a\f\8\c\a\2\-\4\a\2\d\-\1\1\e\f\-\9\c\8\e\-\7\9\4\7\9\0\4\e\2\5\9\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@90 -- # cat 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 
bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:a1616c19-4a2d-11ef-9c8e-7947904e2597 bdev_register:a17bab00-4a2d-11ef-9c8e-7947904e2597 bdev_register:a195e9dc-4a2d-11ef-9c8e-7947904e2597 bdev_register:a1af8ca2-4a2d-11ef-9c8e-7947904e2597 bdev_register:aio_disk 00:04:05.237 Expected events matched: 00:04:05.237 bdev_register:Malloc0 00:04:05.237 bdev_register:Malloc0p0 00:04:05.237 bdev_register:Malloc0p1 00:04:05.237 bdev_register:Malloc0p2 00:04:05.237 bdev_register:Malloc1 00:04:05.237 bdev_register:Malloc3 00:04:05.237 bdev_register:Null0 00:04:05.237 bdev_register:Nvme0n1 00:04:05.237 bdev_register:Nvme0n1p0 00:04:05.237 bdev_register:Nvme0n1p1 00:04:05.237 bdev_register:PTBdevFromMalloc3 00:04:05.237 bdev_register:a1616c19-4a2d-11ef-9c8e-7947904e2597 00:04:05.237 bdev_register:a17bab00-4a2d-11ef-9c8e-7947904e2597 00:04:05.237 bdev_register:a195e9dc-4a2d-11ef-9c8e-7947904e2597 00:04:05.237 bdev_register:a1af8ca2-4a2d-11ef-9c8e-7947904e2597 00:04:05.237 bdev_register:aio_disk 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:04:05.237 02:28:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.237 02:28:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:04:05.237 02:28:52 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:05.237 02:28:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.237 02:28:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.496 02:28:52 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:05.496 02:28:52 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:05.496 02:28:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:05.496 MallocBdevForConfigChangeCheck 00:04:05.496 02:28:52 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:05.496 02:28:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.496 02:28:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.496 02:28:52 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:05.496 02:28:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.064 INFO: shutting down applications... 00:04:06.064 02:28:52 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
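(For orientation, the create_bdev_subsystem_config steps recorded above reduce to the following tgt_rpc calls; this is a condensed sketch assembled from the log, in the order the calls ran, not the test script itself:)

    tgt_rpc bdev_split_create Nvme0n1 2                    # Nvme0n1p0, Nvme0n1p1
    tgt_rpc bdev_split_create Malloc0 3                    # registered before Malloc0 exists
    tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
    tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    tgt_rpc bdev_null_create Null0 32 512
    tgt_rpc bdev_malloc_create 32 512 --name Malloc0       # Malloc0p0..p2 appear via the split
    tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
    tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
    tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
    tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
    tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0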
00:04:06.064 02:28:52 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:06.064 02:28:52 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:06.064 02:28:52 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:06.064 02:28:52 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:06.064 [2024-07-25 02:28:52.823549] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:04:06.323 Calling clear_iscsi_subsystem 00:04:06.323 Calling clear_nvmf_subsystem 00:04:06.323 Calling clear_bdev_subsystem 00:04:06.323 02:28:52 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:06.323 02:28:52 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:06.323 02:28:52 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:06.323 02:28:52 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.323 02:28:52 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:06.323 02:28:52 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:06.582 02:28:53 json_config -- json_config/json_config.sh@349 -- # break 00:04:06.582 02:28:53 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:06.582 02:28:53 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:06.582 02:28:53 json_config -- json_config/common.sh@31 -- # local app=target 00:04:06.582 02:28:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:06.582 02:28:53 json_config -- json_config/common.sh@35 -- # [[ -n 45942 ]] 00:04:06.582 02:28:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45942 00:04:06.582 02:28:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:06.582 02:28:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.582 02:28:53 json_config -- json_config/common.sh@41 -- # kill -0 45942 00:04:06.582 02:28:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.149 02:28:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.149 02:28:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.149 02:28:53 json_config -- json_config/common.sh@41 -- # kill -0 45942 00:04:07.149 02:28:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.149 02:28:53 json_config -- json_config/common.sh@43 -- # break 00:04:07.149 02:28:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.149 SPDK target shutdown done 00:04:07.149 02:28:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.149 INFO: relaunching applications... 00:04:07.149 02:28:53 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
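The shutdown traced above follows a simple pattern: send SIGINT to the target, then poll with kill -0 (up to 30 tries, 0.5 s apart) until the process is gone. A hedged sketch of that loop, with the pid hard-coded only for illustration:

    pid=45942                              # illustrative; common.sh takes it from app_pid["$app"]
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || break    # kill -0 only checks that the pid still exists
            sleep 0.5
    done
    echo 'SPDK target shutdown done'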
00:04:07.149 02:28:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:07.149 02:28:53 json_config -- json_config/common.sh@9 -- # local app=target 00:04:07.149 02:28:53 json_config -- json_config/common.sh@10 -- # shift 00:04:07.149 02:28:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.149 02:28:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.149 02:28:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.149 02:28:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.149 02:28:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.149 02:28:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46129 00:04:07.149 Waiting for target to run... 00:04:07.149 02:28:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.149 02:28:53 json_config -- json_config/common.sh@25 -- # waitforlisten 46129 /var/tmp/spdk_tgt.sock 00:04:07.149 02:28:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:07.149 02:28:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 46129 ']' 00:04:07.149 02:28:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.149 02:28:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:07.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.149 02:28:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.149 02:28:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:07.149 02:28:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.149 [2024-07-25 02:28:53.852999] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:07.149 [2024-07-25 02:28:53.853370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:07.407 EAL: TSC is not safe to use in SMP mode 00:04:07.407 EAL: TSC is not invariant 00:04:07.407 [2024-07-25 02:28:54.072408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.407 [2024-07-25 02:28:54.160231] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:07.407 [2024-07-25 02:28:54.162082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.407 [2024-07-25 02:28:54.300295] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:07.407 [2024-07-25 02:28:54.300333] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:07.666 [2024-07-25 02:28:54.308283] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:07.666 [2024-07-25 02:28:54.308298] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:07.666 [2024-07-25 02:28:54.316297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:07.666 [2024-07-25 02:28:54.316311] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:07.666 [2024-07-25 02:28:54.316317] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:07.666 [2024-07-25 02:28:54.324297] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:07.666 [2024-07-25 02:28:54.392907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:07.666 [2024-07-25 02:28:54.392936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.666 [2024-07-25 02:28:54.392943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x29800f637780 00:04:07.666 [2024-07-25 02:28:54.392950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.666 [2024-07-25 02:28:54.392994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.666 [2024-07-25 02:28:54.393000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:07.925 00:04:07.925 02:28:54 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:07.925 02:28:54 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:07.925 02:28:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:07.925 02:28:54 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:07.925 INFO: Checking if target configuration is the same... 00:04:07.925 02:28:54 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:07.925 02:28:54 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.0QH07q /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:07.925 + '[' 2 -ne 2 ']' 00:04:07.925 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:07.925 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:07.925 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:07.925 +++ basename /tmp//sh-np.0QH07q 00:04:07.925 ++ mktemp /tmp/sh-np.0QH07q.XXX 00:04:07.925 + tmp_file_1=/tmp/sh-np.0QH07q.N76 00:04:07.925 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:07.925 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.925 + tmp_file_2=/tmp/spdk_tgt_config.json.Rh2 00:04:07.925 + ret=0 00:04:07.925 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:07.925 02:28:54 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:07.925 02:28:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.494 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:08.494 + diff -u /tmp/sh-np.0QH07q.N76 /tmp/spdk_tgt_config.json.Rh2 00:04:08.494 INFO: JSON config files are the same 00:04:08.494 + echo 'INFO: JSON config files are the same' 00:04:08.494 + rm /tmp/sh-np.0QH07q.N76 /tmp/spdk_tgt_config.json.Rh2 00:04:08.494 + exit 0 00:04:08.494 02:28:55 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:08.494 INFO: changing configuration and checking if this can be detected... 00:04:08.494 02:28:55 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:08.494 02:28:55 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.494 02:28:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:08.494 02:28:55 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.fTlaq1 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:08.494 + '[' 2 -ne 2 ']' 00:04:08.494 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:08.494 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:08.494 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:08.494 +++ basename /tmp//sh-np.fTlaq1 00:04:08.494 ++ mktemp /tmp/sh-np.fTlaq1.XXX 00:04:08.494 + tmp_file_1=/tmp/sh-np.fTlaq1.NDH 00:04:08.494 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:08.494 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:08.494 + tmp_file_2=/tmp/spdk_tgt_config.json.l6z 00:04:08.494 + ret=0 00:04:08.494 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:08.494 02:28:55 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:08.494 02:28:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.062 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:09.062 + diff -u /tmp/sh-np.fTlaq1.NDH /tmp/spdk_tgt_config.json.l6z 00:04:09.062 + ret=1 00:04:09.062 + echo '=== Start of file: /tmp/sh-np.fTlaq1.NDH ===' 00:04:09.062 + cat /tmp/sh-np.fTlaq1.NDH 00:04:09.062 + echo '=== End of file: /tmp/sh-np.fTlaq1.NDH ===' 00:04:09.062 + echo '' 00:04:09.062 + echo '=== Start of file: /tmp/spdk_tgt_config.json.l6z ===' 00:04:09.062 + cat /tmp/spdk_tgt_config.json.l6z 00:04:09.062 + echo '=== End of file: /tmp/spdk_tgt_config.json.l6z ===' 00:04:09.062 + echo '' 00:04:09.062 + rm /tmp/sh-np.fTlaq1.NDH /tmp/spdk_tgt_config.json.l6z 00:04:09.062 + exit 1 00:04:09.062 INFO: configuration change detected. 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:09.062 02:28:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.062 02:28:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@321 -- # [[ -n 46129 ]] 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:09.062 02:28:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:09.062 02:28:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:04:09.062 02:28:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:04:09.062 02:28:55 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:04:09.062 02:28:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:04:09.321 02:28:56 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:04:09.321 02:28:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
00:04:09.580 02:28:56 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:04:09.580 02:28:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:04:09.839 02:28:56 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:09.839 02:28:56 json_config -- json_config/json_config.sh@197 -- # [[ FreeBSD = Linux ]] 00:04:09.839 02:28:56 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:09.839 02:28:56 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:09.839 02:28:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:09.839 02:28:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.839 02:28:56 json_config -- json_config/json_config.sh@327 -- # killprocess 46129 00:04:09.839 02:28:56 json_config -- common/autotest_common.sh@948 -- # '[' -z 46129 ']' 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@952 -- # kill -0 46129 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@953 -- # uname 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46129 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@956 -- # tail -1 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:09.840 killing process with pid 46129 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46129' 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@967 -- # kill 46129 00:04:09.840 02:28:56 json_config -- common/autotest_common.sh@972 -- # wait 46129 00:04:10.100 02:28:56 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:10.100 02:28:56 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:10.100 02:28:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.100 02:28:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.100 02:28:56 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:10.100 INFO: Success 00:04:10.100 02:28:56 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:10.100 00:04:10.100 real 0m9.604s 00:04:10.100 user 0m14.655s 00:04:10.100 sys 0m1.742s 00:04:10.100 02:28:56 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.100 02:28:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.100 ************************************ 00:04:10.100 END TEST json_config 00:04:10.100 ************************************ 00:04:10.100 02:28:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:10.100 02:28:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:10.100 02:28:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.100 02:28:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.100 02:28:56 -- common/autotest_common.sh@10 -- # set +x 00:04:10.100 ************************************ 00:04:10.100 START TEST json_config_extra_key 
00:04:10.100 ************************************ 00:04:10.100 02:28:56 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:10.100 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:10.100 02:28:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:10.359 02:28:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:10.359 02:28:56 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.359 INFO: launching applications... 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:10.359 02:28:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46258 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.359 Waiting for target to run... 
00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46258 /var/tmp/spdk_tgt.sock 00:04:10.359 02:28:56 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46258 ']' 00:04:10.359 02:28:56 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.359 02:28:56 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.359 02:28:56 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.359 02:28:56 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.359 02:28:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:10.359 02:28:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:10.359 [2024-07-25 02:28:57.007670] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:10.359 [2024-07-25 02:28:57.008011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:10.359 EAL: TSC is not safe to use in SMP mode 00:04:10.359 EAL: TSC is not invariant 00:04:10.359 [2024-07-25 02:28:57.230988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.618 [2024-07-25 02:28:57.318854] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:10.618 [2024-07-25 02:28:57.320602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.187 02:28:57 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.187 02:28:57 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:11.187 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:11.187 INFO: shutting down applications... 00:04:11.187 02:28:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
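Launching the target with the extra-key JSON and waiting for its RPC socket, as traced above, can be sketched roughly as follows; the readiness probe via spdk_get_version is an assumption for illustration, not a quote from common.sh:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
            -r /var/tmp/spdk_tgt.sock \
            --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # Poll the UNIX-domain RPC socket until the target answers (probe RPC is an assumption).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            -t 1 spdk_get_version >/dev/null 2>&1; do
            sleep 0.5
    done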
00:04:11.187 02:28:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46258 ]] 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46258 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46258 00:04:11.187 02:28:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46258 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.756 SPDK target shutdown done 00:04:11.756 02:28:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.756 Success 00:04:11.756 02:28:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:11.756 00:04:11.756 real 0m1.563s 00:04:11.756 user 0m1.270s 00:04:11.756 sys 0m0.390s 00:04:11.756 02:28:58 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.756 02:28:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.756 ************************************ 00:04:11.756 END TEST json_config_extra_key 00:04:11.756 ************************************ 00:04:11.756 02:28:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:11.756 02:28:58 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.756 02:28:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.756 02:28:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.756 02:28:58 -- common/autotest_common.sh@10 -- # set +x 00:04:11.756 ************************************ 00:04:11.756 START TEST alias_rpc 00:04:11.756 ************************************ 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.756 * Looking for test storage... 
00:04:11.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:11.756 02:28:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:11.756 02:28:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46312 00:04:11.756 02:28:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46312 00:04:11.756 02:28:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46312 ']' 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.756 02:28:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.756 [2024-07-25 02:28:58.628667] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:11.756 [2024-07-25 02:28:58.629001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:12.325 EAL: TSC is not safe to use in SMP mode 00:04:12.325 EAL: TSC is not invariant 00:04:12.325 [2024-07-25 02:28:59.045529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.325 [2024-07-25 02:28:59.137629] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:12.325 [2024-07-25 02:28:59.139367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:12.893 02:28:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:12.893 02:28:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46312 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46312 ']' 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46312 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46312 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:12.893 killing process with pid 46312 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46312' 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@967 -- # kill 46312 00:04:12.893 02:28:59 alias_rpc -- common/autotest_common.sh@972 -- # wait 46312 00:04:13.153 00:04:13.153 real 0m1.533s 00:04:13.153 user 0m1.521s 00:04:13.153 sys 0m0.635s 00:04:13.153 02:28:59 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.153 02:28:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.153 ************************************ 00:04:13.153 END TEST alias_rpc 00:04:13.153 ************************************ 00:04:13.153 02:29:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.153 02:29:00 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:13.153 02:29:00 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:13.153 02:29:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.153 02:29:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.153 02:29:00 -- common/autotest_common.sh@10 -- # set +x 00:04:13.153 ************************************ 00:04:13.153 START TEST spdkcli_tcp 00:04:13.153 ************************************ 00:04:13.153 02:29:00 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:13.413 * Looking for test storage... 
00:04:13.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46377 00:04:13.413 02:29:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46377 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46377 ']' 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.413 02:29:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.413 [2024-07-25 02:29:00.229712] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:13.413 [2024-07-25 02:29:00.229986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:13.981 EAL: TSC is not safe to use in SMP mode 00:04:13.981 EAL: TSC is not invariant 00:04:13.981 [2024-07-25 02:29:00.650832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.981 [2024-07-25 02:29:00.742312] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:13.981 [2024-07-25 02:29:00.742353] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:13.981 [2024-07-25 02:29:00.744576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.981 [2024-07-25 02:29:00.744576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.551 02:29:01 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.551 02:29:01 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:14.551 02:29:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46385 00:04:14.551 02:29:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.551 02:29:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:14.551 [ 00:04:14.551 "spdk_get_version", 00:04:14.551 "rpc_get_methods", 00:04:14.551 "env_dpdk_get_mem_stats", 00:04:14.551 "trace_get_info", 00:04:14.551 "trace_get_tpoint_group_mask", 00:04:14.551 "trace_disable_tpoint_group", 00:04:14.551 "trace_enable_tpoint_group", 00:04:14.551 "trace_clear_tpoint_mask", 00:04:14.551 "trace_set_tpoint_mask", 00:04:14.551 "notify_get_notifications", 00:04:14.551 "notify_get_types", 00:04:14.551 "accel_get_stats", 00:04:14.551 "accel_set_options", 00:04:14.551 "accel_set_driver", 00:04:14.551 "accel_crypto_key_destroy", 00:04:14.551 "accel_crypto_keys_get", 00:04:14.551 "accel_crypto_key_create", 00:04:14.551 "accel_assign_opc", 00:04:14.551 "accel_get_module_info", 00:04:14.551 "accel_get_opc_assignments", 00:04:14.551 "bdev_get_histogram", 00:04:14.551 "bdev_enable_histogram", 00:04:14.551 "bdev_set_qos_limit", 00:04:14.551 "bdev_set_qd_sampling_period", 00:04:14.551 "bdev_get_bdevs", 00:04:14.551 "bdev_reset_iostat", 00:04:14.551 "bdev_get_iostat", 00:04:14.551 "bdev_examine", 00:04:14.551 "bdev_wait_for_examine", 00:04:14.551 "bdev_set_options", 00:04:14.551 "keyring_get_keys", 00:04:14.551 "framework_get_pci_devices", 00:04:14.551 "framework_get_config", 00:04:14.551 "framework_get_subsystems", 00:04:14.551 "sock_get_default_impl", 00:04:14.551 "sock_set_default_impl", 00:04:14.551 "sock_impl_set_options", 00:04:14.551 "sock_impl_get_options", 00:04:14.551 "thread_set_cpumask", 00:04:14.551 "framework_get_governor", 00:04:14.551 "framework_get_scheduler", 00:04:14.551 "framework_set_scheduler", 00:04:14.551 "framework_get_reactors", 00:04:14.551 "thread_get_io_channels", 00:04:14.551 "thread_get_pollers", 00:04:14.551 "thread_get_stats", 00:04:14.551 "framework_monitor_context_switch", 00:04:14.551 "spdk_kill_instance", 00:04:14.551 "log_enable_timestamps", 00:04:14.551 "log_get_flags", 00:04:14.551 "log_clear_flag", 00:04:14.551 "log_set_flag", 00:04:14.551 "log_get_level", 00:04:14.551 "log_set_level", 00:04:14.551 "log_get_print_level", 00:04:14.551 "log_set_print_level", 00:04:14.551 "framework_enable_cpumask_locks", 00:04:14.551 "framework_disable_cpumask_locks", 00:04:14.551 "framework_wait_init", 00:04:14.551 "framework_start_init", 00:04:14.551 "iobuf_get_stats", 00:04:14.551 "iobuf_set_options", 00:04:14.551 "vmd_rescan", 00:04:14.551 "vmd_remove_device", 00:04:14.551 "vmd_enable", 00:04:14.551 "nvmf_stop_mdns_prr", 00:04:14.551 "nvmf_publish_mdns_prr", 00:04:14.551 "nvmf_subsystem_get_listeners", 00:04:14.551 "nvmf_subsystem_get_qpairs", 00:04:14.551 "nvmf_subsystem_get_controllers", 00:04:14.551 "nvmf_get_stats", 00:04:14.551 "nvmf_get_transports", 00:04:14.551 "nvmf_create_transport", 00:04:14.551 "nvmf_get_targets", 00:04:14.551 "nvmf_delete_target", 00:04:14.551 "nvmf_create_target", 00:04:14.551 
"nvmf_subsystem_allow_any_host", 00:04:14.551 "nvmf_subsystem_remove_host", 00:04:14.551 "nvmf_subsystem_add_host", 00:04:14.551 "nvmf_ns_remove_host", 00:04:14.551 "nvmf_ns_add_host", 00:04:14.551 "nvmf_subsystem_remove_ns", 00:04:14.551 "nvmf_subsystem_add_ns", 00:04:14.551 "nvmf_subsystem_listener_set_ana_state", 00:04:14.551 "nvmf_discovery_get_referrals", 00:04:14.551 "nvmf_discovery_remove_referral", 00:04:14.551 "nvmf_discovery_add_referral", 00:04:14.551 "nvmf_subsystem_remove_listener", 00:04:14.551 "nvmf_subsystem_add_listener", 00:04:14.551 "nvmf_delete_subsystem", 00:04:14.551 "nvmf_create_subsystem", 00:04:14.551 "nvmf_get_subsystems", 00:04:14.551 "nvmf_set_crdt", 00:04:14.551 "nvmf_set_config", 00:04:14.551 "nvmf_set_max_subsystems", 00:04:14.551 "scsi_get_devices", 00:04:14.551 "iscsi_get_histogram", 00:04:14.551 "iscsi_enable_histogram", 00:04:14.551 "iscsi_set_options", 00:04:14.551 "iscsi_get_auth_groups", 00:04:14.551 "iscsi_auth_group_remove_secret", 00:04:14.551 "iscsi_auth_group_add_secret", 00:04:14.551 "iscsi_delete_auth_group", 00:04:14.551 "iscsi_create_auth_group", 00:04:14.551 "iscsi_set_discovery_auth", 00:04:14.551 "iscsi_get_options", 00:04:14.551 "iscsi_target_node_request_logout", 00:04:14.551 "iscsi_target_node_set_redirect", 00:04:14.551 "iscsi_target_node_set_auth", 00:04:14.551 "iscsi_target_node_add_lun", 00:04:14.551 "iscsi_get_stats", 00:04:14.551 "iscsi_get_connections", 00:04:14.551 "iscsi_portal_group_set_auth", 00:04:14.551 "iscsi_start_portal_group", 00:04:14.551 "iscsi_delete_portal_group", 00:04:14.551 "iscsi_create_portal_group", 00:04:14.551 "iscsi_get_portal_groups", 00:04:14.551 "iscsi_delete_target_node", 00:04:14.551 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.551 "iscsi_target_node_add_pg_ig_maps", 00:04:14.551 "iscsi_create_target_node", 00:04:14.551 "iscsi_get_target_nodes", 00:04:14.551 "iscsi_delete_initiator_group", 00:04:14.551 "iscsi_initiator_group_remove_initiators", 00:04:14.551 "iscsi_initiator_group_add_initiators", 00:04:14.551 "iscsi_create_initiator_group", 00:04:14.551 "iscsi_get_initiator_groups", 00:04:14.551 "keyring_file_remove_key", 00:04:14.551 "keyring_file_add_key", 00:04:14.551 "iaa_scan_accel_module", 00:04:14.551 "dsa_scan_accel_module", 00:04:14.551 "ioat_scan_accel_module", 00:04:14.551 "accel_error_inject_error", 00:04:14.551 "bdev_aio_delete", 00:04:14.551 "bdev_aio_rescan", 00:04:14.551 "bdev_aio_create", 00:04:14.551 "blobfs_create", 00:04:14.551 "blobfs_detect", 00:04:14.551 "blobfs_set_cache_size", 00:04:14.551 "bdev_zone_block_delete", 00:04:14.551 "bdev_zone_block_create", 00:04:14.551 "bdev_delay_delete", 00:04:14.551 "bdev_delay_create", 00:04:14.551 "bdev_delay_update_latency", 00:04:14.551 "bdev_split_delete", 00:04:14.551 "bdev_split_create", 00:04:14.551 "bdev_error_inject_error", 00:04:14.551 "bdev_error_delete", 00:04:14.551 "bdev_error_create", 00:04:14.551 "bdev_raid_set_options", 00:04:14.551 "bdev_raid_remove_base_bdev", 00:04:14.551 "bdev_raid_add_base_bdev", 00:04:14.551 "bdev_raid_delete", 00:04:14.551 "bdev_raid_create", 00:04:14.551 "bdev_raid_get_bdevs", 00:04:14.551 "bdev_lvol_set_parent_bdev", 00:04:14.551 "bdev_lvol_set_parent", 00:04:14.551 "bdev_lvol_check_shallow_copy", 00:04:14.551 "bdev_lvol_start_shallow_copy", 00:04:14.551 "bdev_lvol_grow_lvstore", 00:04:14.551 "bdev_lvol_get_lvols", 00:04:14.551 "bdev_lvol_get_lvstores", 00:04:14.551 "bdev_lvol_delete", 00:04:14.551 "bdev_lvol_set_read_only", 00:04:14.551 "bdev_lvol_resize", 00:04:14.551 "bdev_lvol_decouple_parent", 
00:04:14.551 "bdev_lvol_inflate", 00:04:14.551 "bdev_lvol_rename", 00:04:14.551 "bdev_lvol_clone_bdev", 00:04:14.551 "bdev_lvol_clone", 00:04:14.551 "bdev_lvol_snapshot", 00:04:14.551 "bdev_lvol_create", 00:04:14.551 "bdev_lvol_delete_lvstore", 00:04:14.551 "bdev_lvol_rename_lvstore", 00:04:14.552 "bdev_lvol_create_lvstore", 00:04:14.552 "bdev_passthru_delete", 00:04:14.552 "bdev_passthru_create", 00:04:14.552 "bdev_nvme_send_cmd", 00:04:14.552 "bdev_nvme_get_path_iostat", 00:04:14.552 "bdev_nvme_get_mdns_discovery_info", 00:04:14.552 "bdev_nvme_stop_mdns_discovery", 00:04:14.552 "bdev_nvme_start_mdns_discovery", 00:04:14.552 "bdev_nvme_set_multipath_policy", 00:04:14.552 "bdev_nvme_set_preferred_path", 00:04:14.552 "bdev_nvme_get_io_paths", 00:04:14.552 "bdev_nvme_remove_error_injection", 00:04:14.552 "bdev_nvme_add_error_injection", 00:04:14.552 "bdev_nvme_get_discovery_info", 00:04:14.552 "bdev_nvme_stop_discovery", 00:04:14.552 "bdev_nvme_start_discovery", 00:04:14.552 "bdev_nvme_get_controller_health_info", 00:04:14.552 "bdev_nvme_disable_controller", 00:04:14.552 "bdev_nvme_enable_controller", 00:04:14.552 "bdev_nvme_reset_controller", 00:04:14.552 "bdev_nvme_get_transport_statistics", 00:04:14.552 "bdev_nvme_apply_firmware", 00:04:14.552 "bdev_nvme_detach_controller", 00:04:14.552 "bdev_nvme_get_controllers", 00:04:14.552 "bdev_nvme_attach_controller", 00:04:14.552 "bdev_nvme_set_hotplug", 00:04:14.552 "bdev_nvme_set_options", 00:04:14.552 "bdev_null_resize", 00:04:14.552 "bdev_null_delete", 00:04:14.552 "bdev_null_create", 00:04:14.552 "bdev_malloc_delete", 00:04:14.552 "bdev_malloc_create" 00:04:14.552 ] 00:04:14.552 02:29:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.552 02:29:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.552 02:29:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46377 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46377 ']' 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46377 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46377 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:14.552 killing process with pid 46377 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46377' 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46377 00:04:14.552 02:29:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46377 00:04:14.812 00:04:14.812 real 0m1.571s 00:04:14.812 user 0m2.337s 00:04:14.812 sys 0m0.675s 00:04:14.812 02:29:01 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.812 02:29:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.812 ************************************ 00:04:14.812 END TEST spdkcli_tcp 00:04:14.812 ************************************ 00:04:14.812 02:29:01 -- common/autotest_common.sh@1142 -- # return 
0 00:04:14.812 02:29:01 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.812 02:29:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.812 02:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.812 02:29:01 -- common/autotest_common.sh@10 -- # set +x 00:04:14.812 ************************************ 00:04:14.812 START TEST dpdk_mem_utility 00:04:14.812 ************************************ 00:04:14.812 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:15.071 * Looking for test storage... 00:04:15.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:15.071 02:29:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:15.071 02:29:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46452 00:04:15.071 02:29:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46452 00:04:15.071 02:29:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.071 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46452 ']' 00:04:15.071 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.071 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.072 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.072 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.072 02:29:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.072 [2024-07-25 02:29:01.859681] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:15.072 [2024-07-25 02:29:01.860031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:15.641 EAL: TSC is not safe to use in SMP mode 00:04:15.641 EAL: TSC is not invariant 00:04:15.641 [2024-07-25 02:29:02.279553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.641 [2024-07-25 02:29:02.373061] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:15.641 [2024-07-25 02:29:02.374967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.901 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.901 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:15.901 02:29:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:15.901 02:29:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:15.901 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.901 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.901 { 00:04:15.901 "filename": "/tmp/spdk_mem_dump.txt" 00:04:15.901 } 00:04:15.901 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.901 02:29:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:16.161 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:16.161 1 heaps totaling size 2048.000000 MiB 00:04:16.161 size: 2048.000000 MiB heap id: 0 00:04:16.161 end heaps---------- 00:04:16.161 8 mempools totaling size 592.563660 MiB 00:04:16.161 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:16.161 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:16.161 size: 84.500549 MiB name: bdev_io_46452 00:04:16.161 size: 51.008362 MiB name: evtpool_46452 00:04:16.161 size: 50.000549 MiB name: msgpool_46452 00:04:16.161 size: 21.758911 MiB name: PDU_Pool 00:04:16.161 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:16.161 size: 0.026123 MiB name: Session_Pool 00:04:16.161 end mempools------- 00:04:16.161 6 memzones totaling size 4.142822 MiB 00:04:16.161 size: 1.000366 MiB name: RG_ring_0_46452 00:04:16.161 size: 1.000366 MiB name: RG_ring_1_46452 00:04:16.161 size: 1.000366 MiB name: RG_ring_4_46452 00:04:16.161 size: 1.000366 MiB name: RG_ring_5_46452 00:04:16.161 size: 0.125366 MiB name: RG_ring_2_46452 00:04:16.161 size: 0.015991 MiB name: RG_ring_3_46452 00:04:16.161 end memzones------- 00:04:16.161 02:29:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:16.161 heap id: 0 total size: 2048.000000 MiB number of busy elements: 41 number of free elements: 3 00:04:16.161 list of free elements. size: 1254.071655 MiB 00:04:16.161 element at address: 0x1060000000 with size: 1253.760681 MiB 00:04:16.161 element at address: 0x10e0000000 with size: 0.179688 MiB 00:04:16.161 element at address: 0x10e0400000 with size: 0.131287 MiB 00:04:16.161 list of standard malloc elements. 
size: 197.218201 MiB 00:04:16.161 element at address: 0x10c7bfff80 with size: 132.000122 MiB 00:04:16.161 element at address: 0x10e58b5f80 with size: 64.000122 MiB 00:04:16.161 element at address: 0x10e02fff80 with size: 1.000122 MiB 00:04:16.161 element at address: 0x10effd9f00 with size: 0.140747 MiB 00:04:16.161 element at address: 0x10e0421a80 with size: 0.062622 MiB 00:04:16.161 element at address: 0x10efffdf80 with size: 0.007935 MiB 00:04:16.161 element at address: 0x10e98b6480 with size: 0.000305 MiB 00:04:16.161 element at address: 0x10e002e000 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e002e0c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e002e180 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e002e240 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e002e300 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e0034f00 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e0035100 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e00351c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e0035280 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e003d540 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e003d600 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e003d6c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e003d780 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e04219c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6000 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b60c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6180 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6240 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6300 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b63c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b65c0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6680 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6880 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98b6940 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98d6c00 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e98d6cc0 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e99d6f80 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e9ad7240 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10e9ad7300 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10eccd7640 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10eccd7840 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10eccd7900 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10efed7c40 with size: 0.000183 MiB 00:04:16.161 element at address: 0x10effd9e40 with size: 0.000183 MiB 00:04:16.161 list of memzone associated elements. 
size: 596.710144 MiB 00:04:16.161 element at address: 0x10b93ba640 with size: 211.013000 MiB 00:04:16.161 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:16.161 element at address: 0x10afa453c0 with size: 152.449524 MiB 00:04:16.161 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:16.161 element at address: 0x10e0431b00 with size: 84.000122 MiB 00:04:16.161 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46452_0 00:04:16.161 element at address: 0x10eccd79c0 with size: 48.000122 MiB 00:04:16.161 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46452_0 00:04:16.161 element at address: 0x10e9ad73c0 with size: 48.000122 MiB 00:04:16.161 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46452_0 00:04:16.161 element at address: 0x10c67bfcc0 with size: 20.250671 MiB 00:04:16.161 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:16.161 element at address: 0x10ae6c2dc0 with size: 18.000671 MiB 00:04:16.161 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:16.161 element at address: 0x10efcd7a40 with size: 2.000488 MiB 00:04:16.161 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46452 00:04:16.161 element at address: 0x10ecad7440 with size: 2.000488 MiB 00:04:16.161 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46452 00:04:16.161 element at address: 0x10efed7d00 with size: 1.008118 MiB 00:04:16.161 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46452 00:04:16.161 element at address: 0x10e00fdc40 with size: 1.008118 MiB 00:04:16.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:16.161 element at address: 0x10c66bdb80 with size: 1.008118 MiB 00:04:16.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:16.161 element at address: 0x10b92b8500 with size: 1.008118 MiB 00:04:16.161 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:16.161 element at address: 0x10af943280 with size: 1.008118 MiB 00:04:16.161 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:16.161 element at address: 0x10e99d7040 with size: 1.000488 MiB 00:04:16.161 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46452 00:04:16.161 element at address: 0x10e98d6d80 with size: 1.000488 MiB 00:04:16.161 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46452 00:04:16.161 element at address: 0x10e01ffd80 with size: 1.000488 MiB 00:04:16.161 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46452 00:04:16.161 element at address: 0x10ae5c2bc0 with size: 1.000488 MiB 00:04:16.161 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46452 00:04:16.161 element at address: 0x10e5831b80 with size: 0.500488 MiB 00:04:16.161 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46452 00:04:16.161 element at address: 0x10e007da40 with size: 0.500488 MiB 00:04:16.161 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:16.161 element at address: 0x10af8c3080 with size: 0.500488 MiB 00:04:16.161 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:16.161 element at address: 0x10e003d840 with size: 0.250488 MiB 00:04:16.161 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:16.161 element at address: 0x10e98b6a00 with size: 0.125488 MiB 00:04:16.161 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46452 00:04:16.161 
element at address: 0x10e0035340 with size: 0.031738 MiB 00:04:16.161 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:16.161 element at address: 0x10e002e3c0 with size: 0.023743 MiB 00:04:16.161 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:16.161 element at address: 0x10e58b1d80 with size: 0.016113 MiB 00:04:16.161 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46452 00:04:16.161 element at address: 0x10e0034500 with size: 0.002441 MiB 00:04:16.161 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:16.161 element at address: 0x10eccd7700 with size: 0.000305 MiB 00:04:16.161 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46452 00:04:16.161 element at address: 0x10e98b6740 with size: 0.000305 MiB 00:04:16.161 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46452 00:04:16.161 element at address: 0x10e0034fc0 with size: 0.000305 MiB 00:04:16.161 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:16.161 02:29:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:16.161 02:29:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46452 00:04:16.161 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46452 ']' 00:04:16.161 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46452 00:04:16.161 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:16.161 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:16.161 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46452 00:04:16.162 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:04:16.162 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:16.162 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:16.162 killing process with pid 46452 00:04:16.162 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46452' 00:04:16.162 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46452 00:04:16.162 02:29:02 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46452 00:04:16.421 00:04:16.421 real 0m1.471s 00:04:16.421 user 0m1.439s 00:04:16.421 sys 0m0.601s 00:04:16.421 02:29:03 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.421 02:29:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:16.421 ************************************ 00:04:16.421 END TEST dpdk_mem_utility 00:04:16.421 ************************************ 00:04:16.421 02:29:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:16.421 02:29:03 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:16.421 02:29:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.421 02:29:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.421 02:29:03 -- common/autotest_common.sh@10 -- # set +x 00:04:16.421 ************************************ 00:04:16.421 START TEST event 00:04:16.421 ************************************ 00:04:16.421 02:29:03 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:16.680 * Looking for test storage... 
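The memory accounting dumped earlier comes from the env_dpdk_get_mem_stats RPC, which writes /tmp/spdk_mem_dump.txt, followed by dpdk_mem_info.py parsing that dump; a hedged sketch of the same two steps run by hand against a live target:

    # Ask the running target to dump its DPDK memory state, then summarize it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py            # heap / mempool / memzone totals
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0       # per-element detail for heap 0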
00:04:16.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:16.680 02:29:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:16.680 02:29:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:16.680 02:29:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.680 02:29:03 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:16.680 02:29:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.680 02:29:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.680 ************************************ 00:04:16.680 START TEST event_perf 00:04:16.680 ************************************ 00:04:16.680 02:29:03 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.680 Running I/O for 1 seconds...[2024-07-25 02:29:03.391539] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:16.680 [2024-07-25 02:29:03.391882] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:16.939 EAL: TSC is not safe to use in SMP mode 00:04:16.939 EAL: TSC is not invariant 00:04:16.939 [2024-07-25 02:29:03.815067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:17.197 [2024-07-25 02:29:03.907331] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:17.197 [2024-07-25 02:29:03.907361] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:17.197 [2024-07-25 02:29:03.907383] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:17.197 [2024-07-25 02:29:03.907388] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:17.197 Running I/O for 1 seconds...[2024-07-25 02:29:03.910398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.197 [2024-07-25 02:29:03.910693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.197 [2024-07-25 02:29:03.910553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:17.197 [2024-07-25 02:29:03.910696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:18.133 00:04:18.133 lcore 0: 2665273 00:04:18.133 lcore 1: 2665270 00:04:18.133 lcore 2: 2665271 00:04:18.133 lcore 3: 2665271 00:04:18.133 done. 
00:04:18.133 00:04:18.133 real 0m1.638s 00:04:18.133 user 0m4.173s 00:04:18.133 sys 0m0.459s 00:04:18.133 02:29:05 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.133 02:29:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.133 ************************************ 00:04:18.133 END TEST event_perf 00:04:18.133 ************************************ 00:04:18.391 02:29:05 event -- common/autotest_common.sh@1142 -- # return 0 00:04:18.392 02:29:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:18.392 02:29:05 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:18.392 02:29:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.392 02:29:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.392 ************************************ 00:04:18.392 START TEST event_reactor 00:04:18.392 ************************************ 00:04:18.392 02:29:05 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:18.392 [2024-07-25 02:29:05.077094] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:18.392 [2024-07-25 02:29:05.077429] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:18.650 EAL: TSC is not safe to use in SMP mode 00:04:18.650 EAL: TSC is not invariant 00:04:18.650 [2024-07-25 02:29:05.497748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.909 [2024-07-25 02:29:05.590356] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:18.909 [2024-07-25 02:29:05.592043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.847 test_start 00:04:19.848 oneshot 00:04:19.848 tick 100 00:04:19.848 tick 100 00:04:19.848 tick 250 00:04:19.848 tick 100 00:04:19.848 tick 100 00:04:19.848 tick 100 00:04:19.848 tick 250 00:04:19.848 tick 500 00:04:19.848 tick 100 00:04:19.848 tick 100 00:04:19.848 tick 250 00:04:19.848 tick 100 00:04:19.848 tick 100 00:04:19.848 test_end 00:04:19.848 00:04:19.848 real 0m1.634s 00:04:19.848 user 0m1.173s 00:04:19.848 sys 0m0.458s 00:04:19.848 02:29:06 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.848 02:29:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:19.848 ************************************ 00:04:19.848 END TEST event_reactor 00:04:19.848 ************************************ 00:04:20.107 02:29:06 event -- common/autotest_common.sh@1142 -- # return 0 00:04:20.107 02:29:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:20.107 02:29:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:20.107 02:29:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.107 02:29:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.107 ************************************ 00:04:20.107 START TEST event_reactor_perf 00:04:20.107 ************************************ 00:04:20.107 02:29:06 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:20.107 [2024-07-25 02:29:06.765503] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:04:20.107 [2024-07-25 02:29:06.765845] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:20.366 EAL: TSC is not safe to use in SMP mode 00:04:20.366 EAL: TSC is not invariant 00:04:20.366 [2024-07-25 02:29:07.180740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.626 [2024-07-25 02:29:07.272107] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:20.626 [2024-07-25 02:29:07.273772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.563 test_start 00:04:21.563 test_end 00:04:21.563 Performance: 5071944 events per second 00:04:21.563 00:04:21.563 real 0m1.628s 00:04:21.563 user 0m1.168s 00:04:21.563 sys 0m0.456s 00:04:21.563 02:29:08 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.563 02:29:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:21.563 ************************************ 00:04:21.563 END TEST event_reactor_perf 00:04:21.563 ************************************ 00:04:21.563 02:29:08 event -- common/autotest_common.sh@1142 -- # return 0 00:04:21.563 02:29:08 event -- event/event.sh@49 -- # uname -s 00:04:21.563 02:29:08 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:21.563 00:04:21.563 real 0m5.239s 00:04:21.563 user 0m6.697s 00:04:21.563 sys 0m1.577s 00:04:21.563 02:29:08 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.563 02:29:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.563 ************************************ 00:04:21.563 END TEST event 00:04:21.563 ************************************ 00:04:21.823 02:29:08 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.823 02:29:08 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:21.823 02:29:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.823 02:29:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.823 02:29:08 -- common/autotest_common.sh@10 -- # set +x 00:04:21.823 ************************************ 00:04:21.823 START TEST thread 00:04:21.823 ************************************ 00:04:21.823 02:29:08 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:21.823 * Looking for test storage... 00:04:21.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:21.823 02:29:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:21.823 02:29:08 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:21.823 02:29:08 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.823 02:29:08 thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.823 ************************************ 00:04:21.823 START TEST thread_poller_perf 00:04:21.823 ************************************ 00:04:21.823 02:29:08 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:21.823 [2024-07-25 02:29:08.677280] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:04:21.823 [2024-07-25 02:29:08.677623] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:22.392 EAL: TSC is not safe to use in SMP mode 00:04:22.392 EAL: TSC is not invariant 00:04:22.392 [2024-07-25 02:29:09.098720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.392 [2024-07-25 02:29:09.189679] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:22.392 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:22.392 [2024-07-25 02:29:09.191364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.774 ====================================== 00:04:23.774 busy:2295929316 (cyc) 00:04:23.774 total_run_count: 7949000 00:04:23.774 tsc_hz: 2294609042 (cyc) 00:04:23.774 ====================================== 00:04:23.774 poller_cost: 288 (cyc), 125 (nsec) 00:04:23.774 00:04:23.774 real 0m1.637s 00:04:23.774 user 0m1.177s 00:04:23.774 sys 0m0.457s 00:04:23.774 02:29:10 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.774 02:29:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:23.774 ************************************ 00:04:23.774 END TEST thread_poller_perf 00:04:23.774 ************************************ 00:04:23.774 02:29:10 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:23.774 02:29:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:23.774 02:29:10 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:23.774 02:29:10 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.774 02:29:10 thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.774 ************************************ 00:04:23.774 START TEST thread_poller_perf 00:04:23.774 ************************************ 00:04:23.774 02:29:10 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:23.774 [2024-07-25 02:29:10.366137] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:23.774 [2024-07-25 02:29:10.366486] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:24.033 EAL: TSC is not safe to use in SMP mode 00:04:24.033 EAL: TSC is not invariant 00:04:24.033 [2024-07-25 02:29:10.780641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.033 [2024-07-25 02:29:10.871540] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:24.034 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:24.034 [2024-07-25 02:29:10.873226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.411 ====================================== 00:04:25.411 busy:2295387040 (cyc) 00:04:25.411 total_run_count: 105051000 00:04:25.411 tsc_hz: 2294609042 (cyc) 00:04:25.411 ====================================== 00:04:25.411 poller_cost: 21 (cyc), 9 (nsec) 00:04:25.411 00:04:25.411 real 0m1.628s 00:04:25.411 user 0m1.174s 00:04:25.411 sys 0m0.443s 00:04:25.411 02:29:11 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.411 02:29:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:25.411 ************************************ 00:04:25.411 END TEST thread_poller_perf 00:04:25.411 ************************************ 00:04:25.411 02:29:12 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:25.411 02:29:12 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:25.411 02:29:12 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:25.411 02:29:12 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.411 02:29:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.411 02:29:12 thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.411 ************************************ 00:04:25.411 START TEST thread_spdk_lock 00:04:25.411 ************************************ 00:04:25.411 02:29:12 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:25.411 [2024-07-25 02:29:12.045752] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:25.411 [2024-07-25 02:29:12.046054] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:25.670 EAL: TSC is not safe to use in SMP mode 00:04:25.670 EAL: TSC is not invariant 00:04:25.670 [2024-07-25 02:29:12.467828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.670 [2024-07-25 02:29:12.558913] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:25.670 [2024-07-25 02:29:12.558943] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:25.670 [2024-07-25 02:29:12.561173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.671 [2024-07-25 02:29:12.561173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.239 [2024-07-25 02:29:12.995642] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:26.239 [2024-07-25 02:29:12.995676] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:26.239 [2024-07-25 02:29:12.995682] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x316be0 00:04:26.239 [2024-07-25 02:29:12.996008] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:26.239 [2024-07-25 02:29:12.996108] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:26.239 [2024-07-25 02:29:12.996118] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:26.239 Starting test contend 00:04:26.239 Worker Delay Wait us Hold us Total us 00:04:26.239 0 3 262384 160680 423065 00:04:26.239 1 5 161608 263306 424915 00:04:26.239 PASS test contend 00:04:26.239 Starting test hold_by_poller 00:04:26.239 PASS test hold_by_poller 00:04:26.239 Starting test hold_by_message 00:04:26.239 PASS test hold_by_message 00:04:26.239 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:26.239 100014 assertions passed 00:04:26.239 0 assertions failed 00:04:26.239 00:04:26.239 real 0m1.071s 00:04:26.239 user 0m1.028s 00:04:26.239 sys 0m0.472s 00:04:26.239 02:29:13 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.239 02:29:13 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:04:26.239 ************************************ 00:04:26.239 END TEST thread_spdk_lock 00:04:26.239 ************************************ 00:04:26.499 02:29:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:26.499 00:04:26.499 real 0m4.671s 00:04:26.499 user 0m3.508s 00:04:26.499 sys 0m1.644s 00:04:26.499 02:29:13 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.499 02:29:13 thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.499 ************************************ 00:04:26.499 END TEST thread 00:04:26.499 ************************************ 00:04:26.499 02:29:13 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.499 02:29:13 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:26.499 02:29:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.499 02:29:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.499 02:29:13 -- common/autotest_common.sh@10 -- # set +x 00:04:26.499 ************************************ 00:04:26.499 START TEST accel 00:04:26.499 ************************************ 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:26.499 * Looking for test storage... 
00:04:26.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:26.499 02:29:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:26.499 02:29:13 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:26.499 02:29:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:26.499 02:29:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46752 00:04:26.499 02:29:13 accel -- accel/accel.sh@63 -- # waitforlisten 46752 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@829 -- # '[' -z 46752 ']' 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.499 02:29:13 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.8qWqhv 00:04:26.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.499 02:29:13 accel -- common/autotest_common.sh@10 -- # set +x 00:04:26.759 [2024-07-25 02:29:13.398933] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:26.759 [2024-07-25 02:29:13.399219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:27.018 EAL: TSC is not safe to use in SMP mode 00:04:27.018 EAL: TSC is not invariant 00:04:27.018 [2024-07-25 02:29:13.818701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.278 [2024-07-25 02:29:13.910644] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:27.278 02:29:13 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:27.278 02:29:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:27.278 02:29:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:27.278 02:29:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:27.278 02:29:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:27.278 02:29:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:27.278 02:29:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:27.278 02:29:13 accel -- accel/accel.sh@41 -- # jq -r . 00:04:27.278 [2024-07-25 02:29:13.924335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@862 -- # return 0 00:04:27.538 02:29:14 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:27.538 02:29:14 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:27.538 02:29:14 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:27.538 02:29:14 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:27.538 02:29:14 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:27.538 02:29:14 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:27.538 02:29:14 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@10 -- # set +x 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 
02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # IFS== 00:04:27.538 02:29:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:27.538 02:29:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:27.538 02:29:14 accel -- accel/accel.sh@75 -- # killprocess 46752 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@948 -- # '[' -z 46752 ']' 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@952 -- # kill -0 46752 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@953 -- # uname 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46752 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@956 -- # tail -1 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:27.538 killing process with pid 46752 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46752' 00:04:27.538 02:29:14 accel -- common/autotest_common.sh@967 -- # kill 46752 00:04:27.539 02:29:14 accel -- common/autotest_common.sh@972 -- # wait 46752 00:04:27.798 02:29:14 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:27.798 02:29:14 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@10 -- # set +x 00:04:27.798 02:29:14 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:27.798 02:29:14 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.cfVb5V -h 00:04:27.798 02:29:14 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.798 02:29:14 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:27.798 02:29:14 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.798 02:29:14 accel -- common/autotest_common.sh@10 -- # 
set +x 00:04:27.798 ************************************ 00:04:27.798 START TEST accel_missing_filename 00:04:27.798 ************************************ 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.798 02:29:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:27.798 02:29:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.odhvTh -t 1 -w compress 00:04:27.798 [2024-07-25 02:29:14.654059] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:27.798 [2024-07-25 02:29:14.654394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:28.366 EAL: TSC is not safe to use in SMP mode 00:04:28.366 EAL: TSC is not invariant 00:04:28.366 [2024-07-25 02:29:15.070069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.366 [2024-07-25 02:29:15.161478] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:28.366 02:29:15 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:28.366 [2024-07-25 02:29:15.175142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.366 [2024-07-25 02:29:15.177278] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.366 [2024-07-25 02:29:15.205635] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:28.625 A filename is required. 
00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:28.625 00:04:28.625 real 0m0.680s 00:04:28.625 user 0m0.218s 00:04:28.625 sys 0m0.461s 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.625 02:29:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:28.625 ************************************ 00:04:28.625 END TEST accel_missing_filename 00:04:28.625 ************************************ 00:04:28.625 02:29:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:28.625 02:29:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:28.625 02:29:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:28.625 02:29:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.625 02:29:15 accel -- common/autotest_common.sh@10 -- # set +x 00:04:28.625 ************************************ 00:04:28.625 START TEST accel_compress_verify 00:04:28.625 ************************************ 00:04:28.625 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:28.625 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:28.625 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:28.625 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:28.625 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.626 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:28.626 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.626 02:29:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:28.626 02:29:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yYsgAw -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:28.626 [2024-07-25 02:29:15.387448] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:04:28.626 [2024-07-25 02:29:15.387788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:29.196 EAL: TSC is not safe to use in SMP mode 00:04:29.196 EAL: TSC is not invariant 00:04:29.196 [2024-07-25 02:29:15.806180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.196 [2024-07-25 02:29:15.899295] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:29.196 02:29:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:29.196 [2024-07-25 02:29:15.913379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.196 [2024-07-25 02:29:15.915410] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.196 [2024-07-25 02:29:15.943614] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:29.196 00:04:29.196 Compression does not support the verify option, aborting. 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.196 00:04:29.196 real 0m0.683s 00:04:29.196 user 0m0.229s 00:04:29.196 sys 0m0.453s 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.196 02:29:16 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:29.196 ************************************ 00:04:29.196 END TEST accel_compress_verify 00:04:29.196 ************************************ 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:29.456 02:29:16 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@10 -- # set +x 00:04:29.456 ************************************ 00:04:29.456 START TEST accel_wrong_workload 00:04:29.456 ************************************ 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:29.456 02:29:16 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wR46EA -t 1 -w foobar 00:04:29.456 Unsupported workload type: foobar 00:04:29.456 [2024-07-25 02:29:16.140803] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:29.456 accel_perf options: 00:04:29.456 [-h help message] 00:04:29.456 [-q queue depth per core] 00:04:29.456 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:29.456 [-T number of threads per core 00:04:29.456 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:29.456 [-t time in seconds] 00:04:29.456 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:29.456 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:29.456 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:29.456 [-l for compress/decompress workloads, name of uncompressed input file 00:04:29.456 [-S for crc32c workload, use this seed value (default 0) 00:04:29.456 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:29.456 [-f for fill workload, use this BYTE value (default 255) 00:04:29.456 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:29.456 [-y verify result if this switch is on] 00:04:29.456 [-a tasks to allocate per core (default: same value as -q)] 00:04:29.456 Can be used to spread operations across a wider range of memory. 
00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.456 00:04:29.456 real 0m0.015s 00:04:29.456 user 0m0.005s 00:04:29.456 sys 0m0.009s 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.456 02:29:16 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:29.456 ************************************ 00:04:29.456 END TEST accel_wrong_workload 00:04:29.456 ************************************ 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:29.456 02:29:16 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.456 02:29:16 accel -- common/autotest_common.sh@10 -- # set +x 00:04:29.456 ************************************ 00:04:29.456 START TEST accel_negative_buffers 00:04:29.456 ************************************ 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:29.456 02:29:16 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.jJbN4u -t 1 -w xor -y -x -1 00:04:29.456 -x option must be non-negative. 00:04:29.456 [2024-07-25 02:29:16.218295] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:29.456 accel_perf options: 00:04:29.456 [-h help message] 00:04:29.456 [-q queue depth per core] 00:04:29.456 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:29.456 [-T number of threads per core 00:04:29.456 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:29.456 [-t time in seconds] 00:04:29.456 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:29.456 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:29.456 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:29.456 [-l for compress/decompress workloads, name of uncompressed input file 00:04:29.456 [-S for crc32c workload, use this seed value (default 0) 00:04:29.456 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:29.456 [-f for fill workload, use this BYTE value (default 255) 00:04:29.456 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:29.456 [-y verify result if this switch is on] 00:04:29.456 [-a tasks to allocate per core (default: same value as -q)] 00:04:29.456 Can be used to spread operations across a wider range of memory. 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.456 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:29.457 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.457 00:04:29.457 real 0m0.015s 00:04:29.457 user 0m0.015s 00:04:29.457 sys 0m0.001s 00:04:29.457 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.457 02:29:16 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:29.457 ************************************ 00:04:29.457 END TEST accel_negative_buffers 00:04:29.457 ************************************ 00:04:29.457 02:29:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:29.457 02:29:16 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:29.457 02:29:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:29.457 02:29:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.457 02:29:16 accel -- common/autotest_common.sh@10 -- # set +x 00:04:29.457 ************************************ 00:04:29.457 START TEST accel_crc32c 00:04:29.457 ************************************ 00:04:29.457 02:29:16 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:29.457 02:29:16 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:29.457 02:29:16 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:29.457 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:29.457 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:29.457 02:29:16 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:29.457 02:29:16 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.HGf8U7 -t 1 -w crc32c -S 32 -y 00:04:29.457 [2024-07-25 02:29:16.295514] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:04:29.457 [2024-07-25 02:29:16.295862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:30.025 EAL: TSC is not safe to use in SMP mode 00:04:30.025 EAL: TSC is not invariant 00:04:30.025 [2024-07-25 02:29:16.715334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.025 [2024-07-25 02:29:16.807887] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:30.025 [2024-07-25 02:29:16.821774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.025 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:30.026 02:29:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 
02:29:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:31.403 02:29:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:31.403 00:04:31.403 real 0m1.688s 00:04:31.403 user 0m1.221s 00:04:31.403 sys 0m0.481s 00:04:31.403 02:29:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.403 02:29:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:31.403 ************************************ 00:04:31.403 END TEST accel_crc32c 00:04:31.403 ************************************ 00:04:31.403 02:29:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:31.403 02:29:18 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:31.403 02:29:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:31.403 02:29:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.403 02:29:18 accel -- common/autotest_common.sh@10 -- # set +x 00:04:31.403 ************************************ 00:04:31.403 START TEST accel_crc32c_C2 00:04:31.403 ************************************ 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:31.403 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.MRSG9m -t 1 -w crc32c -y -C 2 00:04:31.403 [2024-07-25 02:29:18.046692] 
Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:31.403 [2024-07-25 02:29:18.047056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:31.662 EAL: TSC is not safe to use in SMP mode 00:04:31.662 EAL: TSC is not invariant 00:04:31.662 [2024-07-25 02:29:18.474507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.920 [2024-07-25 02:29:18.568904] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:31.920 [2024-07-25 02:29:18.582402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.920 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:31.921 02:29:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
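The repeated IFS=:, read -r var val, and case "$var" in entries in the trace above are shell xtrace output from accel.sh walking the test's option/value pairs and recording the opcode and module it finds (accel_opc=crc32c and accel_module=software in this run). A minimal sketch of a loop that would produce a trace of this shape is shown below; the case patterns and the opt_stream variable are illustrative assumptions, not the verbatim accel.sh source.

    # Sketch of the option loop behind the accel.sh@19-23 trace tags above.
    # The case patterns and "opt_stream" are assumptions, not the real script.
    while IFS=: read -r var val; do            # accel.sh@19
        case "$var" in                         # accel.sh@21
            *opc*)    accel_opc=$val ;;        # accel.sh@23, e.g. crc32c
            *module*) accel_module=$val ;;     # accel.sh@22, e.g. software
            *)        : ;;                     # -t, -C, buffer size, ... pass through
        esac
    done <<< "$opt_stream"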
00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:32.859 00:04:32.859 real 0m1.694s 00:04:32.859 user 0m1.236s 00:04:32.859 sys 0m0.472s 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.859 02:29:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:32.859 ************************************ 00:04:32.859 END TEST accel_crc32c_C2 00:04:32.859 ************************************ 00:04:33.119 02:29:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:33.119 02:29:19 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:33.119 02:29:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:33.119 02:29:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.119 02:29:19 accel -- common/autotest_common.sh@10 -- # set +x 00:04:33.119 ************************************ 00:04:33.119 START TEST accel_copy 00:04:33.119 ************************************ 00:04:33.119 02:29:19 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:04:33.119 02:29:19 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:33.119 02:29:19 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:33.119 02:29:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.119 02:29:19 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.119 02:29:19 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:33.119 02:29:19 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uuefTu -t 1 -w copy -y 00:04:33.119 [2024-07-25 02:29:19.800948] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:33.119 [2024-07-25 02:29:19.801253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:33.378 EAL: TSC is not safe to use in SMP mode 00:04:33.378 EAL: TSC is not invariant 00:04:33.378 [2024-07-25 02:29:20.216427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.638 [2024-07-25 02:29:20.308849] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:33.638 [2024-07-25 02:29:20.322590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:33.638 02:29:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:34.576 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:34.835 02:29:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:34.835 00:04:34.835 real 0m1.682s 00:04:34.835 user 0m1.239s 00:04:34.835 sys 0m0.454s 00:04:34.835 02:29:21 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.835 02:29:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:34.835 ************************************ 00:04:34.835 END TEST accel_copy 00:04:34.835 ************************************ 00:04:34.835 02:29:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:34.835 02:29:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:34.835 02:29:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:04:34.835 02:29:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.835 02:29:21 accel -- common/autotest_common.sh@10 -- # set +x 00:04:34.835 ************************************ 00:04:34.835 START TEST accel_fill 00:04:34.835 ************************************ 00:04:34.835 02:29:21 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:34.835 02:29:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:34.835 02:29:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:34.835 02:29:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:34.836 02:29:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:34.836 02:29:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:34.836 02:29:21 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.eE53eQ -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:34.836 [2024-07-25 02:29:21.535046] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
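For reference, the fill workload launched above can be re-run by hand against a local SPDK build using only the flags that appear in the log; the harness-generated temporary -c config (/tmp//sh-np.eE53eQ) is specific to this run and is left out of the sketch.

    # Re-running the same fill workload by hand (a sketch; assumes an SPDK
    # checkout with examples built, flags copied verbatim from the log above).
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
    # The values traced below line up with these flags: a '1 seconds' run,
    # fill pattern 0x80 (decimal 128), and the two 64s from -q and -a.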
00:04:34.836 [2024-07-25 02:29:21.535382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:35.095 EAL: TSC is not safe to use in SMP mode 00:04:35.095 EAL: TSC is not invariant 00:04:35.095 [2024-07-25 02:29:21.959563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.354 [2024-07-25 02:29:22.050529] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:35.354 [2024-07-25 02:29:22.064771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:35.354 02:29:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:36.732 02:29:23 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:36.732 02:29:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:36.733 02:29:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:36.733 00:04:36.733 real 0m1.689s 00:04:36.733 user 0m1.215s 00:04:36.733 sys 0m0.490s 00:04:36.733 02:29:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.733 02:29:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:36.733 ************************************ 00:04:36.733 END TEST accel_fill 00:04:36.733 ************************************ 00:04:36.733 02:29:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:36.733 02:29:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:36.733 02:29:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:36.733 02:29:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.733 02:29:23 accel -- common/autotest_common.sh@10 -- # set +x 00:04:36.733 ************************************ 00:04:36.733 START TEST accel_copy_crc32c 00:04:36.733 ************************************ 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:36.733 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.iwBior -t 1 -w copy_crc32c -y 00:04:36.733 [2024-07-25 02:29:23.290789] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
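The START TEST / END TEST banners, the '[' N -le 1 ']' argument check, the xtrace_disable calls, and the real/user/sys timing all come from the run_test helper in autotest_common.sh. A hypothetical reconstruction of its core behaviour, based only on what is visible in this trace, could look like the following; the real helper does more bookkeeping.

    # Hypothetical sketch of run_test (autotest_common.sh); only the banners,
    # the argument-count test, and the timing are taken from this log.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        [ $# -le 1 ] && echo "run_test: expected a command plus arguments" >&2
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }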
00:04:36.733 [2024-07-25 02:29:23.291125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:36.992 EAL: TSC is not safe to use in SMP mode 00:04:36.992 EAL: TSC is not invariant 00:04:36.992 [2024-07-25 02:29:23.711323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.992 [2024-07-25 02:29:23.803612] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:36.992 [2024-07-25 02:29:23.817254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.992 02:29:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:38.372 00:04:38.372 real 0m1.688s 00:04:38.372 user 0m1.213s 00:04:38.372 sys 0m0.489s 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.372 02:29:24 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:38.372 ************************************ 00:04:38.372 END TEST accel_copy_crc32c 00:04:38.372 ************************************ 00:04:38.372 02:29:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:38.372 02:29:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:38.372 02:29:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:38.372 02:29:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.372 02:29:25 accel -- common/autotest_common.sh@10 -- # set +x 
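Each of these tests closes with the same three accel.sh@27 checks, which the xtrace shows with the variables already expanded ([[ -n software ]], [[ -n copy_crc32c ]], [[ software == \s\o\f\t\w\a\r\e ]] here). Before expansion they amount to the checks below; expected_module is an illustrative name for the module the test requested.

    # The three accel.sh@27 assertions, written with the variables unexpanded.
    # "expected_module" is an illustrative name, not taken from the log.
    [[ -n $accel_module ]]                     # a module was reported ("software" here)
    [[ -n $accel_opc ]]                        # an opcode was reported ("copy_crc32c" here)
    [[ $accel_module == "$expected_module" ]]  # the module that ran is the one requested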
00:04:38.372 ************************************ 00:04:38.372 START TEST accel_copy_crc32c_C2 00:04:38.372 ************************************ 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:38.372 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Ltddrt -t 1 -w copy_crc32c -y -C 2 00:04:38.372 [2024-07-25 02:29:25.034721] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:38.372 [2024-07-25 02:29:25.035070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:38.632 EAL: TSC is not safe to use in SMP mode 00:04:38.632 EAL: TSC is not invariant 00:04:38.632 [2024-07-25 02:29:25.458734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.892 [2024-07-25 02:29:25.538011] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:04:38.892 [2024-07-25 02:29:25.552153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.892 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.893 02:29:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:39.842 00:04:39.842 real 0m1.679s 00:04:39.842 user 0m1.216s 00:04:39.842 sys 0m0.479s 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.842 02:29:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:39.842 ************************************ 00:04:39.842 END TEST accel_copy_crc32c_C2 00:04:39.842 ************************************ 00:04:40.102 02:29:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:40.102 02:29:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:40.102 02:29:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:40.102 02:29:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.102 02:29:26 accel -- common/autotest_common.sh@10 -- # set +x 00:04:40.102 ************************************ 00:04:40.102 START TEST accel_dualcast 00:04:40.102 ************************************ 00:04:40.102 02:29:26 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:04:40.102 02:29:26 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:04:40.102 02:29:26 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:04:40.102 02:29:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.102 02:29:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.102 02:29:26 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:40.103 02:29:26 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.GCbzau -t 1 -w dualcast -y 00:04:40.103 [2024-07-25 02:29:26.766827] Starting SPDK 
v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:40.103 [2024-07-25 02:29:26.767102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:40.362 EAL: TSC is not safe to use in SMP mode 00:04:40.362 EAL: TSC is not invariant 00:04:40.362 [2024-07-25 02:29:27.194872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.622 [2024-07-25 02:29:27.287854] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:04:40.622 [2024-07-25 02:29:27.301758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 
02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:40.622 02:29:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:41.561 02:29:28 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:04:41.561 02:29:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:41.561 00:04:41.561 real 0m1.695s 00:04:41.561 user 0m1.252s 00:04:41.561 sys 0m0.457s 00:04:41.561 02:29:28 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.561 02:29:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:04:41.561 ************************************ 00:04:41.561 END TEST accel_dualcast 00:04:41.561 ************************************ 00:04:41.822 02:29:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:41.822 02:29:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:41.822 02:29:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:41.822 02:29:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.822 02:29:28 accel -- common/autotest_common.sh@10 -- # set +x 00:04:41.822 ************************************ 00:04:41.822 START TEST accel_compare 00:04:41.822 ************************************ 00:04:41.822 02:29:28 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:04:41.822 02:29:28 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:04:41.822 02:29:28 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:04:41.822 02:29:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:41.822 02:29:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:41.822 02:29:28 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:41.822 02:29:28 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.SpjSJ7 -t 1 -w compare -y 00:04:41.822 [2024-07-25 02:29:28.526386] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 
initialization... 00:04:41.822 [2024-07-25 02:29:28.526655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:42.082 EAL: TSC is not safe to use in SMP mode 00:04:42.082 EAL: TSC is not invariant 00:04:42.082 [2024-07-25 02:29:28.947038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.342 [2024-07-25 02:29:29.037703] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:04:42.342 [2024-07-25 02:29:29.051615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 
02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:42.342 02:29:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:04:43.723 02:29:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:43.723 00:04:43.723 real 0m1.689s 00:04:43.723 user 0m1.241s 00:04:43.723 sys 0m0.460s 00:04:43.723 02:29:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.723 02:29:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:04:43.723 ************************************ 00:04:43.723 END TEST accel_compare 00:04:43.723 ************************************ 00:04:43.723 02:29:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:43.723 02:29:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:43.723 02:29:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:43.723 02:29:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.723 02:29:30 accel -- common/autotest_common.sh@10 -- # set +x 00:04:43.724 ************************************ 00:04:43.724 START TEST accel_xor 00:04:43.724 ************************************ 00:04:43.724 02:29:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:04:43.724 02:29:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:43.724 02:29:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:43.724 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.724 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.724 02:29:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:43.724 02:29:30 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.x7MvEO -t 1 -w xor -y 00:04:43.724 [2024-07-25 02:29:30.267867] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
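The accel_xor pass that has just started is driven by the same accel_perf example binary as the earlier runs. Stripped of the temporary config file the harness passes with -c (the /tmp//sh-np.* path in the log above), it reduces to roughly the following standalone invocation; this is a sketch built only from the flags visible in the log, with the binary path taken verbatim from the trace:

  # 1-second software xor workload with result verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y

The trace that follows records accel_module=software and val=2 right after val=xor, presumably the default of two xor source buffers.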
00:04:43.724 [2024-07-25 02:29:30.268203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:43.984 EAL: TSC is not safe to use in SMP mode 00:04:43.984 EAL: TSC is not invariant 00:04:43.984 [2024-07-25 02:29:30.684286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.984 [2024-07-25 02:29:30.763498] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:04:43.984 [2024-07-25 02:29:30.777362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:43.984 02:29:30 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:43.984 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:43.985 02:29:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:45.366 00:04:45.366 real 0m1.672s 00:04:45.366 user 0m1.230s 00:04:45.366 sys 0m0.458s 00:04:45.366 02:29:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.366 02:29:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:45.366 ************************************ 00:04:45.366 END TEST accel_xor 00:04:45.366 ************************************ 00:04:45.366 02:29:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:45.366 02:29:31 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:04:45.366 02:29:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:45.366 02:29:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.366 02:29:31 accel -- common/autotest_common.sh@10 -- # set +x 00:04:45.366 ************************************ 00:04:45.366 START TEST accel_xor 00:04:45.366 ************************************ 00:04:45.366 02:29:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:04:45.366 02:29:31 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ymIVWx -t 1 -w xor -y -x 3 00:04:45.366 [2024-07-25 02:29:31.994969] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
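This second accel_xor block repeats the same workload with one extra flag, -x 3, as shown in the run_test line above. A minimal equivalent, again omitting the harness-generated -c config and using only the flags the log reports:

  # 1-second verified xor run with the extra -x 3 from the run_test line
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

In the trace this shows up as val=3 immediately after val=xor, where the previous run recorded val=2.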
00:04:45.366 [2024-07-25 02:29:31.995306] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:45.626 EAL: TSC is not safe to use in SMP mode 00:04:45.626 EAL: TSC is not invariant 00:04:45.626 [2024-07-25 02:29:32.413341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.626 [2024-07-25 02:29:32.504537] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:45.626 02:29:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:04:45.886 [2024-07-25 02:29:32.518178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:45.886 02:29:32 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:45.886 02:29:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:46.825 02:29:33 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:46.825 02:29:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:46.825 00:04:46.825 real 0m1.685s 00:04:46.825 user 0m1.243s 00:04:46.825 sys 0m0.457s 00:04:46.825 02:29:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.825 02:29:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:46.825 ************************************ 00:04:46.825 END TEST accel_xor 00:04:46.825 ************************************ 00:04:46.825 02:29:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:46.825 02:29:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:04:46.825 02:29:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:46.825 02:29:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.825 02:29:33 accel -- common/autotest_common.sh@10 -- # set +x 00:04:47.085 ************************************ 00:04:47.085 START TEST accel_dif_verify 00:04:47.085 ************************************ 00:04:47.085 02:29:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:04:47.085 02:29:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:04:47.085 02:29:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:04:47.085 02:29:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.085 02:29:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.085 02:29:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:04:47.085 02:29:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.BcF2vh -t 1 -w dif_verify 00:04:47.085 [2024-07-25 02:29:33.740444] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
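The dif_verify pass starting here moves from the basic copy/compare/xor operations to DIF (Data Integrity Field) verification, still on the software module. Reduced to the flags visible in the log, and again leaving out the temporary -c config, the invocation is essentially:

  # 1-second software dif_verify run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

The trace below adds a few extra values to the usual set: two '4096 bytes' entries plus '512 bytes' and '8 bytes', apparently the buffer and metadata sizing the accel.sh harness uses for its DIF tests.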
00:04:47.085 [2024-07-25 02:29:33.740783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:47.348 EAL: TSC is not safe to use in SMP mode 00:04:47.348 EAL: TSC is not invariant 00:04:47.348 [2024-07-25 02:29:34.159506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.609 [2024-07-25 02:29:34.249943] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:04:47.609 [2024-07-25 02:29:34.264842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.609 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:47.610 02:29:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:04:48.549 02:29:35 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:48.549 00:04:48.549 real 0m1.686s 00:04:48.549 user 0m1.240s 00:04:48.549 sys 0m0.453s 00:04:48.549 02:29:35 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.549 02:29:35 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:04:48.549 ************************************ 00:04:48.549 END TEST accel_dif_verify 00:04:48.549 ************************************ 00:04:48.809 02:29:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:48.809 02:29:35 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:04:48.809 02:29:35 accel 
-- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:48.809 02:29:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.809 02:29:35 accel -- common/autotest_common.sh@10 -- # set +x 00:04:48.809 ************************************ 00:04:48.809 START TEST accel_dif_generate 00:04:48.809 ************************************ 00:04:48.809 02:29:35 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:04:48.809 02:29:35 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:04:48.809 02:29:35 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:04:48.809 02:29:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:48.809 02:29:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:48.809 02:29:35 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:04:48.809 02:29:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.WhMikA -t 1 -w dif_generate 00:04:48.809 [2024-07-25 02:29:35.479332] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:48.809 [2024-07-25 02:29:35.479614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:49.068 EAL: TSC is not safe to use in SMP mode 00:04:49.068 EAL: TSC is not invariant 00:04:49.068 [2024-07-25 02:29:35.899981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.328 [2024-07-25 02:29:35.989946] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:04:49.328 02:29:35 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 
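This accel_dif_generate pass is the generation counterpart of the dif_verify run above; per the run_test and accel_perf lines in the log, only the workload name changes. A sketch of the equivalent standalone command, with the same caveats as before:

  # 1-second software dif_generate run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate

As with the other software-module runs, the 1-second workload takes roughly 1.7 s of wall time end to end (the real/user/sys lines after each END TEST marker); the gap between the 'Starting SPDK' and 'Reactor started on core 0' timestamps accounts for most of the difference.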
00:04:49.328 [2024-07-25 02:29:36.003631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.328 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.328 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.328 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.328 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:49.329 02:29:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:04:50.267 02:29:37 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:50.267 00:04:50.267 real 0m1.685s 00:04:50.267 user 0m1.221s 00:04:50.267 sys 0m0.482s 00:04:50.267 02:29:37 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.267 02:29:37 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:04:50.267 ************************************ 00:04:50.267 END TEST accel_dif_generate 00:04:50.267 ************************************ 00:04:50.527 02:29:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:50.527 02:29:37 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:04:50.527 02:29:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:50.527 02:29:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.527 02:29:37 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.527 ************************************ 00:04:50.527 START TEST accel_dif_generate_copy 00:04:50.527 ************************************ 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:04:50.527 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DpRemW -t 1 -w dif_generate_copy 00:04:50.527 [2024-07-25 02:29:37.214562] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:50.527 [2024-07-25 02:29:37.214892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:50.787 EAL: TSC is not safe to use in SMP mode 00:04:50.787 EAL: TSC is not invariant 00:04:50.787 [2024-07-25 02:29:37.633866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.046 [2024-07-25 02:29:37.725258] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:04:51.046 [2024-07-25 02:29:37.738969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:04:51.046 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:51.047 02:29:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.426 00:04:52.426 real 0m1.686s 00:04:52.426 user 0m1.233s 00:04:52.426 sys 0m0.465s 00:04:52.426 02:29:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.426 02:29:38 
accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:04:52.426 ************************************ 00:04:52.426 END TEST accel_dif_generate_copy 00:04:52.426 ************************************ 00:04:52.426 02:29:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:52.426 02:29:38 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:04:52.426 02:29:38 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:52.426 02:29:38 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:52.426 02:29:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.426 02:29:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.426 ************************************ 00:04:52.426 START TEST accel_comp 00:04:52.426 ************************************ 00:04:52.426 02:29:38 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:52.426 02:29:38 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:04:52.426 02:29:38 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:04:52.426 02:29:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.426 02:29:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.426 02:29:38 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:52.426 02:29:38 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Fj4WKN -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:52.426 [2024-07-25 02:29:38.950912] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:52.426 [2024-07-25 02:29:38.951253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:52.686 EAL: TSC is not safe to use in SMP mode 00:04:52.686 EAL: TSC is not invariant 00:04:52.686 [2024-07-25 02:29:39.368648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.686 [2024-07-25 02:29:39.461507] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
00:04:52.686 [2024-07-25 02:29:39.475383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.686 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:52.687 02:29:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:54.067 
02:29:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:04:54.067 02:29:40 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.067 00:04:54.067 real 0m1.688s 00:04:54.067 user 0m1.248s 00:04:54.067 sys 0m0.458s 00:04:54.067 02:29:40 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.067 02:29:40 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:04:54.067 ************************************ 00:04:54.067 END TEST accel_comp 00:04:54.067 ************************************ 00:04:54.067 02:29:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:54.067 02:29:40 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.067 02:29:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:54.067 02:29:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.067 02:29:40 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.067 ************************************ 00:04:54.067 START TEST accel_decomp 00:04:54.067 ************************************ 00:04:54.067 02:29:40 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.067 02:29:40 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:04:54.067 02:29:40 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:04:54.067 02:29:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.067 02:29:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.067 02:29:40 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.067 02:29:40 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.iZPv3o -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:54.067 [2024-07-25 02:29:40.692118] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:54.067 [2024-07-25 02:29:40.692450] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:54.327 EAL: TSC is not safe to use in SMP mode 00:04:54.327 EAL: TSC is not invariant 00:04:54.327 [2024-07-25 02:29:41.115733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.327 [2024-07-25 02:29:41.205547] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:04:54.327 02:29:41 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:04:54.587 [2024-07-25 02:29:41.219216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.587 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:54.588 02:29:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:55.532 02:29:42 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:55.532 02:29:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:55.533 02:29:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:55.533 00:04:55.533 real 0m1.691s 00:04:55.533 user 0m1.219s 00:04:55.533 sys 0m0.488s 00:04:55.533 02:29:42 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.533 02:29:42 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:04:55.533 ************************************ 00:04:55.533 END TEST accel_decomp 00:04:55.533 ************************************ 00:04:55.533 02:29:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:55.533 02:29:42 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:55.533 02:29:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:04:55.533 02:29:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.533 02:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.801 ************************************ 00:04:55.801 START TEST accel_decomp_full 00:04:55.801 ************************************ 00:04:55.801 02:29:42 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:55.801 02:29:42 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:04:55.801 02:29:42 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:04:55.801 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:55.801 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:55.801 02:29:42 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:55.801 02:29:42 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.oQ7peM -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:55.801 [2024-07-25 02:29:42.435233] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:55.801 [2024-07-25 02:29:42.435579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:56.060 EAL: TSC is not safe to use in SMP mode 00:04:56.060 EAL: TSC is not invariant 00:04:56.060 [2024-07-25 02:29:42.863069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.320 [2024-07-25 02:29:42.953855] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:04:56.320 [2024-07-25 02:29:42.966542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.320 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.321 02:29:42 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:56.321 02:29:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:57.256 02:29:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.256 00:04:57.256 real 0m1.703s 00:04:57.256 user 0m1.232s 00:04:57.256 sys 0m0.487s 00:04:57.256 02:29:44 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.256 02:29:44 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:04:57.256 ************************************ 00:04:57.256 END TEST accel_decomp_full 00:04:57.256 ************************************ 00:04:57.516 02:29:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:57.516 02:29:44 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:04:57.516 02:29:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:04:57.516 02:29:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.516 02:29:44 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.516 ************************************ 00:04:57.516 START TEST accel_decomp_mcore 00:04:57.516 ************************************ 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:57.516 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.mCn4g7 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:57.516 [2024-07-25 02:29:44.188987] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:57.516 [2024-07-25 02:29:44.189316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:57.792 EAL: TSC is not safe to use in SMP mode 00:04:57.792 EAL: TSC is not invariant 00:04:57.792 [2024-07-25 02:29:44.613897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.052 [2024-07-25 02:29:44.706968] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:58.052 [2024-07-25 02:29:44.706998] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:58.052 [2024-07-25 02:29:44.707020] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:58.052 [2024-07-25 02:29:44.707026] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
00:04:58.052 [2024-07-25 02:29:44.722148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.052 [2024-07-25 02:29:44.722284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.052 [2024-07-25 02:29:44.722223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.052 [2024-07-25 02:29:44.722283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.052 02:29:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.991 00:04:58.991 real 0m1.698s 00:04:58.991 user 0m4.359s 00:04:58.991 sys 0m0.462s 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.991 02:29:45 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:04:58.991 ************************************ 00:04:58.991 END TEST accel_decomp_mcore 00:04:58.991 ************************************ 00:04:59.251 02:29:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:59.251 02:29:45 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:59.251 02:29:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:04:59.251 02:29:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.251 02:29:45 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.251 ************************************ 00:04:59.251 START TEST accel_decomp_full_mcore 00:04:59.251 ************************************ 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:59.251 02:29:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hJnRUu -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:59.251 [2024-07-25 02:29:45.936690] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:04:59.251 [2024-07-25 02:29:45.937030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:59.511 EAL: TSC is not safe to use in SMP mode 00:04:59.511 EAL: TSC is not invariant 00:04:59.511 [2024-07-25 02:29:46.365325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.771 [2024-07-25 02:29:46.458738] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:59.771 [2024-07-25 02:29:46.458773] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:59.771 [2024-07-25 02:29:46.458780] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:59.771 [2024-07-25 02:29:46.458785] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:04:59.771 [2024-07-25 02:29:46.473807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.771 [2024-07-25 02:29:46.474378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.771 [2024-07-25 02:29:46.474339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.771 [2024-07-25 02:29:46.474378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.771 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:59.772 02:29:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 
02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.154 00:05:01.154 real 0m1.718s 00:05:01.154 user 0m4.378s 
00:05:01.154 sys 0m0.487s 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.154 02:29:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:01.154 ************************************ 00:05:01.154 END TEST accel_decomp_full_mcore 00:05:01.154 ************************************ 00:05:01.154 02:29:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:01.154 02:29:47 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:01.154 02:29:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:01.154 02:29:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.154 02:29:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.154 ************************************ 00:05:01.154 START TEST accel_decomp_mthread 00:05:01.154 ************************************ 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:01.154 02:29:47 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2cFGX3 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:01.154 [2024-07-25 02:29:47.704081] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:01.154 [2024-07-25 02:29:47.704436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:01.415 EAL: TSC is not safe to use in SMP mode 00:05:01.415 EAL: TSC is not invariant 00:05:01.415 [2024-07-25 02:29:48.126533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.415 [2024-07-25 02:29:48.218313] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:01.415 [2024-07-25 02:29:48.232149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:01.415 02:29:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.796 00:05:02.796 real 0m1.695s 00:05:02.796 user 0m1.248s 00:05:02.796 sys 0m0.463s 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.796 02:29:49 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:02.796 ************************************ 00:05:02.796 END TEST accel_decomp_mthread 00:05:02.796 ************************************ 00:05:02.796 02:29:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:02.796 02:29:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:02.796 02:29:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:02.796 02:29:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.796 02:29:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.796 ************************************ 00:05:02.796 START TEST accel_decomp_full_mthread 00:05:02.796 ************************************ 00:05:02.796 02:29:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:02.796 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:02.796 02:29:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:02.796 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:02.796 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:02.796 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:02.796 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.GOFkOI -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:02.796 [2024-07-25 02:29:49.449848] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:02.796 [2024-07-25 02:29:49.450186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:03.063 EAL: TSC is not safe to use in SMP mode 00:05:03.063 EAL: TSC is not invariant 00:05:03.063 [2024-07-25 02:29:49.869230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.339 [2024-07-25 02:29:49.962348] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:03.339 [2024-07-25 02:29:49.976110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.339 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.340 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.340 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:03.340 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:03.340 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:03.340 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:03.340 02:29:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.278 00:05:04.278 real 0m1.718s 00:05:04.278 user 0m1.267s 00:05:04.278 sys 0m0.467s 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.278 02:29:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:04.278 ************************************ 00:05:04.278 END TEST accel_decomp_full_mthread 00:05:04.279 ************************************ 00:05:04.538 02:29:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.538 02:29:51 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:04.538 02:29:51 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.BUQTYS 00:05:04.538 02:29:51 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:04.538 02:29:51 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.538 02:29:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.538 ************************************ 00:05:04.539 START TEST accel_dif_functional_tests 00:05:04.539 ************************************ 00:05:04.539 02:29:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.BUQTYS 00:05:04.539 [2024-07-25 02:29:51.219865] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:04.539 [2024-07-25 02:29:51.220180] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:04.798 EAL: TSC is not safe to use in SMP mode 00:05:04.798 EAL: TSC is not invariant 00:05:04.798 [2024-07-25 02:29:51.641837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.058 [2024-07-25 02:29:51.734717] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:05.058 [2024-07-25 02:29:51.734755] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:05.058 [2024-07-25 02:29:51.734761] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:05.058 02:29:51 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:05.058 02:29:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.058 02:29:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.058 02:29:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.058 02:29:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.058 02:29:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.058 02:29:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:05.058 02:29:51 accel -- accel/accel.sh@41 -- # jq -r . 
00:05:05.058 [2024-07-25 02:29:51.749527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.058 [2024-07-25 02:29:51.749451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.058 [2024-07-25 02:29:51.749525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.058 00:05:05.058 00:05:05.058 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.058 http://cunit.sourceforge.net/ 00:05:05.058 00:05:05.058 00:05:05.058 Suite: accel_dif 00:05:05.058 Test: verify: DIF generated, GUARD check ...passed 00:05:05.058 Test: verify: DIF generated, APPTAG check ...passed 00:05:05.058 Test: verify: DIF generated, REFTAG check ...passed 00:05:05.058 Test: verify: DIF not generated, GUARD check ...passed 00:05:05.058 Test: verify: DIF not generated, APPTAG check ...passed 00:05:05.058 Test: verify: DIF not generated, REFTAG check ...passed 00:05:05.058 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:05.058 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:05.058 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:05.058 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:05.058 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:05.058 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 02:29:51.764916] dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:05.058 [2024-07-25 02:29:51.764974] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:05.058 [2024-07-25 02:29:51.765007] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:05.058 [2024-07-25 02:29:51.765056] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:05.058 [2024-07-25 02:29:51.765128] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:05.058 passed 00:05:05.058 Test: verify copy: DIF generated, GUARD check ...passed 00:05:05.058 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:05.058 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:05.058 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:05.058 Test: verify copy: DIF not generated, APPTAG check ...passed 00:05:05.058 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 02:29:51.765225] dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:05.058 [2024-07-25 02:29:51.765257] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:05.058 passed 00:05:05.058 Test: generate copy: DIF generated, GUARD check ...passed 00:05:05.058 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:05.058 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:05.058 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-07-25 02:29:51.765289] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:05.058 passed 00:05:05.058 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:05.058 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:05.058 Test: generate copy: iovecs-len validate ...[2024-07-25 02:29:51.765423] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:05.058 passed 00:05:05.058 Test: generate copy: buffer alignment validate ...passed 00:05:05.058 00:05:05.058 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.058 suites 1 1 n/a 0 0 00:05:05.058 tests 26 26 26 0 0 00:05:05.058 asserts 115 115 115 0 n/a 00:05:05.058 00:05:05.058 Elapsed time = 0.016 seconds 00:05:05.058 00:05:05.058 real 0m0.718s 00:05:05.058 user 0m0.366s 00:05:05.058 sys 0m0.476s 00:05:05.058 02:29:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.058 02:29:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:05.058 ************************************ 00:05:05.058 END TEST accel_dif_functional_tests 00:05:05.058 ************************************ 00:05:05.318 02:29:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:05.318 00:05:05.318 real 0m38.761s 00:05:05.318 user 0m33.463s 00:05:05.318 sys 0m12.229s 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:05.318 02:29:51 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.318 02:29:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.318 ************************************ 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.318 END TEST accel 00:05:05.318 ************************************ 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:05.318 02:29:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:05.318 02:29:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:05.318 02:29:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
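The decompress cases traced above all drive the same accel_perf example binary; the exact invocations are visible in the accel/accel.sh@12 lines. The following is a minimal sketch of re-running them by hand, outside the harness. It assumes the harness-generated JSON config (the /tmp//sh-np.* path passed via -c) can be replaced with a near-empty stand-in, and the flag comments are inferred from the traced values rather than taken from accel_perf documentation.

# Sketch only: reproducing the decompress runs recorded above.
SPDK=/home/vagrant/spdk_repo/spdk
BIB=$SPDK/test/accel/bib                 # input file used by every decompress case above
CONF=$(mktemp)                           # stand-in for the harness-generated /tmp//sh-np.* config
echo '{"subsystems": []}' > "$CONF"      # assumption: the traced accel_json_cfg array was empty

# accel_decomp_mcore / accel_decomp_full_mcore: 4-core mask, with -o 0 the full
# input is used ('111250 bytes' in the trace, vs '4096 bytes' otherwise)
$SPDK/build/examples/accel_perf -c "$CONF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf

# accel_decomp_mthread / accel_decomp_full_mthread: single core, -T 2 as recorded in the trace
$SPDK/build/examples/accel_perf -c "$CONF" -t 1 -w decompress -l "$BIB" -y -T 2
$SPDK/build/examples/accel_perf -c "$CONF" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2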
00:05:05.318 02:29:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.318 02:29:52 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:05.318 02:29:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.318 02:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.318 02:29:52 -- common/autotest_common.sh@10 -- # set +x 00:05:05.318 ************************************ 00:05:05.318 START TEST accel_rpc 00:05:05.318 ************************************ 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:05.318 * Looking for test storage... 00:05:05.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:05.318 02:29:52 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.318 02:29:52 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:05.318 02:29:52 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47502 00:05:05.318 02:29:52 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47502 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47502 ']' 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.318 02:29:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.318 [2024-07-25 02:29:52.200015] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:05.318 [2024-07-25 02:29:52.200290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:05.887 EAL: TSC is not safe to use in SMP mode 00:05:05.887 EAL: TSC is not invariant 00:05:05.887 [2024-07-25 02:29:52.621274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.887 [2024-07-25 02:29:52.711457] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:05.887 [2024-07-25 02:29:52.713155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:06.457 02:29:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:06.457 02:29:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:06.457 02:29:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:06.457 02:29:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:06.457 02:29:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 ************************************ 00:05:06.457 START TEST accel_assign_opcode 00:05:06.457 ************************************ 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 [2024-07-25 02:29:53.129380] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 [2024-07-25 02:29:53.141373] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.457 software 00:05:06.457 00:05:06.457 real 0m0.080s 00:05:06.457 user 0m0.020s 00:05:06.457 sys 0m0.009s 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.457 02:29:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:06.457 ************************************ 00:05:06.457 END TEST accel_assign_opcode 00:05:06.457 ************************************ 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.457 02:29:53 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47502 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47502 ']' 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47502 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47502 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:06.457 killing process with pid 47502 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47502' 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@967 -- # kill 47502 00:05:06.457 02:29:53 accel_rpc -- common/autotest_common.sh@972 -- # wait 47502 00:05:06.716 ************************************ 00:05:06.716 END TEST accel_rpc 00:05:06.716 ************************************ 00:05:06.716 00:05:06.716 real 0m1.482s 00:05:06.716 user 0m1.289s 00:05:06.716 sys 0m0.727s 00:05:06.716 02:29:53 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.716 02:29:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.716 02:29:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:06.716 02:29:53 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:06.716 02:29:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.716 02:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.716 02:29:53 -- common/autotest_common.sh@10 -- # set +x 00:05:06.716 ************************************ 00:05:06.716 START TEST app_cmdline 00:05:06.716 ************************************ 00:05:06.716 02:29:53 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:06.975 * Looking for test storage... 00:05:06.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:06.975 02:29:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:06.975 02:29:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47580 00:05:06.975 02:29:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47580 00:05:06.975 02:29:53 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47580 ']' 00:05:06.975 02:29:53 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.975 02:29:53 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.975 02:29:53 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:06.975 02:29:53 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.975 02:29:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:06.975 02:29:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:06.975 [2024-07-25 02:29:53.720728] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:06.975 [2024-07-25 02:29:53.721064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:07.543 EAL: TSC is not safe to use in SMP mode 00:05:07.543 EAL: TSC is not invariant 00:05:07.543 [2024-07-25 02:29:54.143289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.543 [2024-07-25 02:29:54.234651] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:07.543 [2024-07-25 02:29:54.236375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.803 02:29:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.803 02:29:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:07.803 02:29:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:08.063 { 00:05:08.063 "version": "SPDK v24.09-pre git sha1 c8a637412", 00:05:08.063 "fields": { 00:05:08.063 "major": 24, 00:05:08.063 "minor": 9, 00:05:08.063 "patch": 0, 00:05:08.063 "suffix": "-pre", 00:05:08.063 "commit": "c8a637412" 00:05:08.063 } 00:05:08.063 } 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:08.063 02:29:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:08.063 02:29:54 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:08.324 request: 00:05:08.324 { 00:05:08.324 "method": "env_dpdk_get_mem_stats", 00:05:08.324 "req_id": 1 00:05:08.324 } 00:05:08.324 Got JSON-RPC error response 00:05:08.324 response: 00:05:08.324 { 00:05:08.324 "code": -32601, 00:05:08.324 "message": "Method not found" 00:05:08.324 } 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:08.324 02:29:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47580 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47580 ']' 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47580 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47580 00:05:08.324 02:29:54 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:05:08.324 02:29:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:08.324 02:29:55 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:08.324 killing process with pid 47580 00:05:08.324 02:29:55 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47580' 00:05:08.324 02:29:55 app_cmdline -- common/autotest_common.sh@967 -- # kill 47580 00:05:08.324 02:29:55 app_cmdline -- common/autotest_common.sh@972 -- # wait 47580 00:05:08.583 00:05:08.583 real 0m1.692s 00:05:08.583 user 0m1.867s 00:05:08.583 sys 0m0.662s 00:05:08.583 02:29:55 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.583 02:29:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.583 ************************************ 00:05:08.583 END TEST app_cmdline 00:05:08.583 ************************************ 00:05:08.583 02:29:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.583 02:29:55 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:08.583 02:29:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.583 02:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.583 02:29:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.583 ************************************ 00:05:08.583 START TEST version 00:05:08.583 ************************************ 00:05:08.583 02:29:55 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:08.583 * Looking for test storage... 
00:05:08.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:08.583 02:29:55 version -- app/version.sh@17 -- # get_header_version major 00:05:08.583 02:29:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:08.583 02:29:55 version -- app/version.sh@14 -- # cut -f2 00:05:08.583 02:29:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.583 02:29:55 version -- app/version.sh@17 -- # major=24 00:05:08.583 02:29:55 version -- app/version.sh@18 -- # get_header_version minor 00:05:08.583 02:29:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:08.583 02:29:55 version -- app/version.sh@14 -- # cut -f2 00:05:08.843 02:29:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.843 02:29:55 version -- app/version.sh@18 -- # minor=9 00:05:08.843 02:29:55 version -- app/version.sh@19 -- # get_header_version patch 00:05:08.843 02:29:55 version -- app/version.sh@14 -- # cut -f2 00:05:08.843 02:29:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:08.843 02:29:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.843 02:29:55 version -- app/version.sh@19 -- # patch=0 00:05:08.843 02:29:55 version -- app/version.sh@20 -- # get_header_version suffix 00:05:08.843 02:29:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:08.843 02:29:55 version -- app/version.sh@14 -- # cut -f2 00:05:08.843 02:29:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:08.843 02:29:55 version -- app/version.sh@20 -- # suffix=-pre 00:05:08.843 02:29:55 version -- app/version.sh@22 -- # version=24.9 00:05:08.843 02:29:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:08.843 02:29:55 version -- app/version.sh@28 -- # version=24.9rc0 00:05:08.843 02:29:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:08.843 02:29:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:08.843 02:29:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:08.843 02:29:55 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:08.843 00:05:08.843 real 0m0.256s 00:05:08.843 user 0m0.185s 00:05:08.843 sys 0m0.160s 00:05:08.843 02:29:55 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.843 02:29:55 version -- common/autotest_common.sh@10 -- # set +x 00:05:08.843 ************************************ 00:05:08.843 END TEST version 00:05:08.843 ************************************ 00:05:08.843 02:29:55 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.843 02:29:55 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:05:08.843 02:29:55 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:08.843 02:29:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.843 02:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.843 02:29:55 -- common/autotest_common.sh@10 -- # set +x 00:05:08.843 ************************************ 00:05:08.843 START TEST blockdev_general 00:05:08.843 
************************************ 00:05:08.843 02:29:55 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:09.102 * Looking for test storage... 00:05:09.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:09.102 02:29:55 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@673 -- # '[' FreeBSD = Linux ']' 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@678 -- # PRE_RESERVED_MEM=2048 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47715 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:09.102 02:29:55 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47715 00:05:09.102 02:29:55 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47715 ']' 00:05:09.102 02:29:55 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.102 02:29:55 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.102 02:29:55 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:09.102 02:29:55 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.102 02:29:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:09.102 [2024-07-25 02:29:55.780901] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:09.102 [2024-07-25 02:29:55.781220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:09.362 EAL: TSC is not safe to use in SMP mode 00:05:09.362 EAL: TSC is not invariant 00:05:09.362 [2024-07-25 02:29:56.199139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.622 [2024-07-25 02:29:56.289807] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:09.622 [2024-07-25 02:29:56.291488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.881 02:29:56 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.881 02:29:56 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:05:09.881 02:29:56 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:09.881 02:29:56 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:05:09.881 02:29:56 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:09.881 02:29:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.881 02:29:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:09.881 [2024-07-25 02:29:56.735396] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:09.881 [2024-07-25 02:29:56.735434] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:09.881 00:05:09.881 [2024-07-25 02:29:56.743383] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:09.881 [2024-07-25 02:29:56.743396] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:09.881 00:05:09.881 Malloc0 00:05:09.881 Malloc1 00:05:10.150 Malloc2 00:05:10.150 Malloc3 00:05:10.150 Malloc4 00:05:10.150 Malloc5 00:05:10.150 Malloc6 00:05:10.150 Malloc7 00:05:10.150 Malloc8 00:05:10.150 Malloc9 00:05:10.150 [2024-07-25 02:29:56.835391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:10.150 [2024-07-25 02:29:56.835418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.150 [2024-07-25 02:29:56.835448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27ca0743a980 00:05:10.150 [2024-07-25 02:29:56.835454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.150 [2024-07-25 02:29:56.835735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.150 [2024-07-25 02:29:56.835754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:10.150 TestPT 00:05:10.150 02:29:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.150 02:29:56 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:10.150 5000+0 records in 00:05:10.150 5000+0 records out 00:05:10.150 10240000 bytes transferred in 0.032057 secs (319429808 bytes/sec) 00:05:10.151 02:29:56 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 
2048 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.151 AIO0 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.151 02:29:56 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.151 02:29:56 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:05:10.151 02:29:56 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.151 02:29:56 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.151 02:29:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.151 02:29:57 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.151 02:29:57 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:10.151 02:29:57 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.151 02:29:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.151 02:29:57 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.426 02:29:57 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:10.426 02:29:57 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:10.426 02:29:57 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:10.426 02:29:57 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.426 02:29:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.426 02:29:57 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.426 02:29:57 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:10.426 02:29:57 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:10.427 02:29:57 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c8605196-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c8605196-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1a516c43-784b-1858-9c02-500cb2304967"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1a516c43-784b-1858-9c02-500cb2304967",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "11669106-45b5-3757-b7ef-4ac84623c7c6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "11669106-45b5-3757-b7ef-4ac84623c7c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b9dcf93d-18ac-de5e-8113-9c8b02214f88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b9dcf93d-18ac-de5e-8113-9c8b02214f88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f5c76cb7-5ec9-3150-8f52-a0c301973eff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f5c76cb7-5ec9-3150-8f52-a0c301973eff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "b19ef144-d9a0-ef57-a5b6-d9ea3c249bcb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b19ef144-d9a0-ef57-a5b6-d9ea3c249bcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7ea848e-3b72-1d5b-9b68-e38ddf50e9ca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7ea848e-3b72-1d5b-9b68-e38ddf50e9ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5d75afe7-85d0-195c-99a2-f46356600f5f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5d75afe7-85d0-195c-99a2-f46356600f5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8a4b3d7e-501d-825b-9a07-a43094e270d8"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a4b3d7e-501d-825b-9a07-a43094e270d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e5e790ab-541c-aa5c-aa94-75cd0adbc7db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e5e790ab-541c-aa5c-aa94-75cd0adbc7db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2009f8a0-f5ac-1d5e-95d8-9c9c074d4872"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2009f8a0-f5ac-1d5e-95d8-9c9c074d4872",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5e7a2ed3-60af-9e58-b705-19554a7eef47"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5e7a2ed3-60af-9e58-b705-19554a7eef47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c86e63f4-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c86e63f4-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c86e63f4-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c865334f-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "c8666bc3-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c86f9400-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c86f9400-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c86f9400-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c867a43d-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c868dcc7-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c870cc56-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c870cc56-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c870cc56-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c86a153f-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c86be9fd-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c87a91f4-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c87a91f4-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:10.427 02:29:57 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:10.427 02:29:57 blockdev_general -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=Malloc0 00:05:10.427 02:29:57 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:10.427 02:29:57 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 47715 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47715 ']' 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47715 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47715 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:10.427 killing process with pid 47715 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47715' 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@967 -- # kill 47715 00:05:10.427 02:29:57 blockdev_general -- common/autotest_common.sh@972 -- # wait 47715 00:05:10.687 02:29:57 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:10.687 02:29:57 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:10.687 02:29:57 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:10.687 02:29:57 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.687 02:29:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:10.687 ************************************ 00:05:10.687 START TEST bdev_hello_world 00:05:10.687 ************************************ 00:05:10.687 02:29:57 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:10.687 [2024-07-25 02:29:57.501718] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:10.687 [2024-07-25 02:29:57.502024] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:11.257 EAL: TSC is not safe to use in SMP mode 00:05:11.257 EAL: TSC is not invariant 00:05:11.257 [2024-07-25 02:29:57.922398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.257 [2024-07-25 02:29:58.014901] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:11.257 [2024-07-25 02:29:58.016607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.257 [2024-07-25 02:29:58.072053] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:11.257 [2024-07-25 02:29:58.072080] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:11.257 [2024-07-25 02:29:58.080037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:11.257 [2024-07-25 02:29:58.080053] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:11.257 [2024-07-25 02:29:58.088048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:11.257 [2024-07-25 02:29:58.088064] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:11.257 [2024-07-25 02:29:58.088070] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:11.257 [2024-07-25 02:29:58.136058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:11.257 [2024-07-25 02:29:58.136108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.257 [2024-07-25 02:29:58.136115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2c2f76c36800 00:05:11.257 [2024-07-25 02:29:58.136121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.257 [2024-07-25 02:29:58.136461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.257 [2024-07-25 02:29:58.136485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:11.517 [2024-07-25 02:29:58.236155] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:11.517 [2024-07-25 02:29:58.236197] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:11.517 [2024-07-25 02:29:58.236206] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:11.517 [2024-07-25 02:29:58.236217] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:11.517 [2024-07-25 02:29:58.236228] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:11.517 [2024-07-25 02:29:58.236233] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:11.517 [2024-07-25 02:29:58.236242] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:05:11.517 00:05:11.517 [2024-07-25 02:29:58.236248] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:11.777 00:05:11.777 real 0m0.956s 00:05:11.777 user 0m0.500s 00:05:11.777 sys 0m0.455s 00:05:11.777 02:29:58 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.777 02:29:58 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:11.777 ************************************ 00:05:11.777 END TEST bdev_hello_world 00:05:11.777 ************************************ 00:05:11.777 02:29:58 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:11.778 02:29:58 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:11.778 02:29:58 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:11.778 02:29:58 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.778 02:29:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:11.778 ************************************ 00:05:11.778 START TEST bdev_bounds 00:05:11.778 ************************************ 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=47767 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:11.778 Process bdevio pid: 47767 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 47767' 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 47767 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47767 ']' 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.778 02:29:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:11.778 [2024-07-25 02:29:58.516600] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:11.778 [2024-07-25 02:29:58.517022] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:12.347 EAL: TSC is not safe to use in SMP mode 00:05:12.347 EAL: TSC is not invariant 00:05:12.347 [2024-07-25 02:29:58.958965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.347 [2024-07-25 02:29:59.050194] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:12.347 [2024-07-25 02:29:59.050256] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:05:12.347 [2024-07-25 02:29:59.050262] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:12.347 [2024-07-25 02:29:59.053173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.347 [2024-07-25 02:29:59.053080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.347 [2024-07-25 02:29:59.053176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.347 [2024-07-25 02:29:59.108928] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:12.347 [2024-07-25 02:29:59.108971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:12.347 [2024-07-25 02:29:59.116911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:12.347 [2024-07-25 02:29:59.116928] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:12.347 [2024-07-25 02:29:59.124926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:12.347 [2024-07-25 02:29:59.124942] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:12.347 [2024-07-25 02:29:59.124948] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:12.347 [2024-07-25 02:29:59.172929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:12.347 [2024-07-25 02:29:59.172971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.347 [2024-07-25 02:29:59.172979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x249540a36800 00:05:12.347 [2024-07-25 02:29:59.172985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.347 [2024-07-25 02:29:59.173271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.347 [2024-07-25 02:29:59.173304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:12.607 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.607 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:05:12.607 02:29:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:12.866 I/O targets: 00:05:12.866 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:12.866 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:12.866 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:12.866 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:12.866 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:12.866 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:12.866 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:12.866 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:12.866 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:12.866 00:05:12.866 00:05:12.866 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.866 http://cunit.sourceforge.net/ 00:05:12.866 00:05:12.866 00:05:12.866 Suite: bdevio tests on: 
AIO0 00:05:12.866 Test: blockdev write read block ...passed 00:05:12.866 Test: blockdev write zeroes read block ...passed 00:05:12.866 Test: blockdev write zeroes read no split ...passed 00:05:12.866 Test: blockdev write zeroes read split ...passed 00:05:12.866 Test: blockdev write zeroes read split partial ...passed 00:05:12.866 Test: blockdev reset ...passed 00:05:12.866 Test: blockdev write read 8 blocks ...passed 00:05:12.866 Test: blockdev write read size > 128k ...passed 00:05:12.866 Test: blockdev write read invalid size ...passed 00:05:12.866 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:12.866 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:12.866 Test: blockdev write read max offset ...passed 00:05:12.866 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:12.866 Test: blockdev writev readv 8 blocks ...passed 00:05:12.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:12.867 Test: blockdev writev readv block ...passed 00:05:12.867 Test: blockdev writev readv size > 128k ...passed 00:05:12.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:12.867 Test: blockdev comparev and writev ...passed 00:05:12.867 Test: blockdev nvme passthru rw ...passed 00:05:12.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:12.867 Test: blockdev nvme admin passthru ...passed 00:05:12.867 Test: blockdev copy ...passed 00:05:12.867 Suite: bdevio tests on: raid1 00:05:12.867 Test: blockdev write read block ...passed 00:05:12.867 Test: blockdev write zeroes read block ...passed 00:05:12.867 Test: blockdev write zeroes read no split ...passed 00:05:12.867 Test: blockdev write zeroes read split ...passed 00:05:12.867 Test: blockdev write zeroes read split partial ...passed 00:05:12.867 Test: blockdev reset ...passed 00:05:12.867 Test: blockdev write read 8 blocks ...passed 00:05:12.867 Test: blockdev write read size > 128k ...passed 00:05:12.867 Test: blockdev write read invalid size ...passed 00:05:12.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:12.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:12.867 Test: blockdev write read max offset ...passed 00:05:12.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:12.867 Test: blockdev writev readv 8 blocks ...passed 00:05:12.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:12.867 Test: blockdev writev readv block ...passed 00:05:12.867 Test: blockdev writev readv size > 128k ...passed 00:05:12.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:12.867 Test: blockdev comparev and writev ...passed 00:05:12.867 Test: blockdev nvme passthru rw ...passed 00:05:12.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:12.867 Test: blockdev nvme admin passthru ...passed 00:05:12.867 Test: blockdev copy ...passed 00:05:12.867 Suite: bdevio tests on: concat0 00:05:12.867 Test: blockdev write read block ...passed 00:05:12.867 Test: blockdev write zeroes read block ...passed 00:05:12.867 Test: blockdev write zeroes read no split ...passed 00:05:12.867 Test: blockdev write zeroes read split ...passed 00:05:12.867 Test: blockdev write zeroes read split partial ...passed 00:05:12.867 Test: blockdev reset ...passed 00:05:12.867 Test: blockdev write read 8 blocks ...passed 00:05:12.867 Test: blockdev write read size > 128k ...passed 00:05:12.867 Test: blockdev write read invalid size ...passed 00:05:12.867 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:12.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:12.867 Test: blockdev write read max offset ...passed 00:05:12.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:12.867 Test: blockdev writev readv 8 blocks ...passed 00:05:12.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:12.867 Test: blockdev writev readv block ...passed 00:05:12.867 Test: blockdev writev readv size > 128k ...passed 00:05:12.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:12.867 Test: blockdev comparev and writev ...passed 00:05:12.867 Test: blockdev nvme passthru rw ...passed 00:05:12.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:12.867 Test: blockdev nvme admin passthru ...passed 00:05:12.867 Test: blockdev copy ...passed 00:05:12.867 Suite: bdevio tests on: raid0 00:05:12.867 Test: blockdev write read block ...passed 00:05:12.867 Test: blockdev write zeroes read block ...passed 00:05:12.867 Test: blockdev write zeroes read no split ...passed 00:05:12.867 Test: blockdev write zeroes read split ...passed 00:05:12.867 Test: blockdev write zeroes read split partial ...passed 00:05:12.867 Test: blockdev reset ...passed 00:05:12.867 Test: blockdev write read 8 blocks ...passed 00:05:12.867 Test: blockdev write read size > 128k ...passed 00:05:12.867 Test: blockdev write read invalid size ...passed 00:05:12.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:12.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:12.867 Test: blockdev write read max offset ...passed 00:05:12.867 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:12.867 Test: blockdev writev readv 8 blocks ...passed 00:05:12.867 Test: blockdev writev readv 30 x 1block ...passed 00:05:12.867 Test: blockdev writev readv block ...passed 00:05:12.867 Test: blockdev writev readv size > 128k ...passed 00:05:12.867 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:12.867 Test: blockdev comparev and writev ...passed 00:05:12.867 Test: blockdev nvme passthru rw ...passed 00:05:12.867 Test: blockdev nvme passthru vendor specific ...passed 00:05:12.867 Test: blockdev nvme admin passthru ...passed 00:05:12.867 Test: blockdev copy ...passed 00:05:12.867 Suite: bdevio tests on: TestPT 00:05:12.867 Test: blockdev write read block ...passed 00:05:12.867 Test: blockdev write zeroes read block ...passed 00:05:12.867 Test: blockdev write zeroes read no split ...passed 00:05:12.867 Test: blockdev write zeroes read split ...passed 00:05:12.867 Test: blockdev write zeroes read split partial ...passed 00:05:12.867 Test: blockdev reset ...passed 00:05:13.148 Test: blockdev write read 8 blocks ...passed 00:05:13.148 Test: blockdev write read size > 128k ...passed 00:05:13.148 Test: blockdev write read invalid size ...passed 00:05:13.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.148 Test: blockdev write read max offset ...passed 00:05:13.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.148 Test: blockdev writev readv 8 blocks ...passed 00:05:13.148 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.148 Test: blockdev writev readv block ...passed 00:05:13.148 Test: blockdev writev readv size > 128k ...passed 
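Note: each bdevio suite above replays the same battery of 23 blockdev tests against its bdev, so with the 16 bdevs under test (Malloc0 through AIO0) the totals come out to 16 x 23 = 368 tests, which is exactly what the run summary at the end of this pass reports.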
00:05:13.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.148 Test: blockdev comparev and writev ...passed 00:05:13.148 Test: blockdev nvme passthru rw ...passed 00:05:13.148 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.148 Test: blockdev nvme admin passthru ...passed 00:05:13.148 Test: blockdev copy ...passed 00:05:13.148 Suite: bdevio tests on: Malloc2p7 00:05:13.148 Test: blockdev write read block ...passed 00:05:13.148 Test: blockdev write zeroes read block ...passed 00:05:13.148 Test: blockdev write zeroes read no split ...passed 00:05:13.148 Test: blockdev write zeroes read split ...passed 00:05:13.148 Test: blockdev write zeroes read split partial ...passed 00:05:13.148 Test: blockdev reset ...passed 00:05:13.148 Test: blockdev write read 8 blocks ...passed 00:05:13.148 Test: blockdev write read size > 128k ...passed 00:05:13.148 Test: blockdev write read invalid size ...passed 00:05:13.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.148 Test: blockdev write read max offset ...passed 00:05:13.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.148 Test: blockdev writev readv 8 blocks ...passed 00:05:13.148 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.148 Test: blockdev writev readv block ...passed 00:05:13.148 Test: blockdev writev readv size > 128k ...passed 00:05:13.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.148 Test: blockdev comparev and writev ...passed 00:05:13.148 Test: blockdev nvme passthru rw ...passed 00:05:13.148 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.148 Test: blockdev nvme admin passthru ...passed 00:05:13.148 Test: blockdev copy ...passed 00:05:13.148 Suite: bdevio tests on: Malloc2p6 00:05:13.148 Test: blockdev write read block ...passed 00:05:13.148 Test: blockdev write zeroes read block ...passed 00:05:13.148 Test: blockdev write zeroes read no split ...passed 00:05:13.148 Test: blockdev write zeroes read split ...passed 00:05:13.148 Test: blockdev write zeroes read split partial ...passed 00:05:13.148 Test: blockdev reset ...passed 00:05:13.148 Test: blockdev write read 8 blocks ...passed 00:05:13.148 Test: blockdev write read size > 128k ...passed 00:05:13.148 Test: blockdev write read invalid size ...passed 00:05:13.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.148 Test: blockdev write read max offset ...passed 00:05:13.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.148 Test: blockdev writev readv 8 blocks ...passed 00:05:13.148 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.148 Test: blockdev writev readv block ...passed 00:05:13.148 Test: blockdev writev readv size > 128k ...passed 00:05:13.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.148 Test: blockdev comparev and writev ...passed 00:05:13.148 Test: blockdev nvme passthru rw ...passed 00:05:13.148 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.148 Test: blockdev nvme admin passthru ...passed 00:05:13.148 Test: blockdev copy ...passed 00:05:13.148 Suite: bdevio tests on: Malloc2p5 00:05:13.148 Test: blockdev write read block ...passed 00:05:13.148 Test: blockdev write zeroes read block ...passed 00:05:13.148 Test: blockdev 
write zeroes read no split ...passed 00:05:13.148 Test: blockdev write zeroes read split ...passed 00:05:13.148 Test: blockdev write zeroes read split partial ...passed 00:05:13.148 Test: blockdev reset ...passed 00:05:13.148 Test: blockdev write read 8 blocks ...passed 00:05:13.148 Test: blockdev write read size > 128k ...passed 00:05:13.148 Test: blockdev write read invalid size ...passed 00:05:13.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.148 Test: blockdev write read max offset ...passed 00:05:13.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.148 Test: blockdev writev readv 8 blocks ...passed 00:05:13.148 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.148 Test: blockdev writev readv block ...passed 00:05:13.148 Test: blockdev writev readv size > 128k ...passed 00:05:13.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.148 Test: blockdev comparev and writev ...passed 00:05:13.148 Test: blockdev nvme passthru rw ...passed 00:05:13.148 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.148 Test: blockdev nvme admin passthru ...passed 00:05:13.148 Test: blockdev copy ...passed 00:05:13.148 Suite: bdevio tests on: Malloc2p4 00:05:13.148 Test: blockdev write read block ...passed 00:05:13.148 Test: blockdev write zeroes read block ...passed 00:05:13.148 Test: blockdev write zeroes read no split ...passed 00:05:13.148 Test: blockdev write zeroes read split ...passed 00:05:13.148 Test: blockdev write zeroes read split partial ...passed 00:05:13.148 Test: blockdev reset ...passed 00:05:13.148 Test: blockdev write read 8 blocks ...passed 00:05:13.148 Test: blockdev write read size > 128k ...passed 00:05:13.148 Test: blockdev write read invalid size ...passed 00:05:13.148 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.148 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.148 Test: blockdev write read max offset ...passed 00:05:13.148 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.148 Test: blockdev writev readv 8 blocks ...passed 00:05:13.148 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.148 Test: blockdev writev readv block ...passed 00:05:13.148 Test: blockdev writev readv size > 128k ...passed 00:05:13.148 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.148 Test: blockdev comparev and writev ...passed 00:05:13.148 Test: blockdev nvme passthru rw ...passed 00:05:13.148 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.148 Test: blockdev nvme admin passthru ...passed 00:05:13.148 Test: blockdev copy ...passed 00:05:13.148 Suite: bdevio tests on: Malloc2p3 00:05:13.148 Test: blockdev write read block ...passed 00:05:13.148 Test: blockdev write zeroes read block ...passed 00:05:13.148 Test: blockdev write zeroes read no split ...passed 00:05:13.148 Test: blockdev write zeroes read split ...passed 00:05:13.148 Test: blockdev write zeroes read split partial ...passed 00:05:13.148 Test: blockdev reset ...passed 00:05:13.148 Test: blockdev write read 8 blocks ...passed 00:05:13.148 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 Suite: bdevio tests on: Malloc2p2 00:05:13.149 Test: blockdev write read block ...passed 00:05:13.149 Test: blockdev write zeroes read block ...passed 00:05:13.149 Test: blockdev write zeroes read no split ...passed 00:05:13.149 Test: blockdev write zeroes read split ...passed 00:05:13.149 Test: blockdev write zeroes read split partial ...passed 00:05:13.149 Test: blockdev reset ...passed 00:05:13.149 Test: blockdev write read 8 blocks ...passed 00:05:13.149 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 Suite: bdevio tests on: Malloc2p1 00:05:13.149 Test: blockdev write read block ...passed 00:05:13.149 Test: blockdev write zeroes read block ...passed 00:05:13.149 Test: blockdev write zeroes read no split ...passed 00:05:13.149 Test: blockdev write zeroes read split ...passed 00:05:13.149 Test: blockdev write zeroes read split partial ...passed 00:05:13.149 Test: blockdev reset ...passed 00:05:13.149 Test: blockdev write read 8 blocks ...passed 00:05:13.149 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 
00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 Suite: bdevio tests on: Malloc2p0 00:05:13.149 Test: blockdev write read block ...passed 00:05:13.149 Test: blockdev write zeroes read block ...passed 00:05:13.149 Test: blockdev write zeroes read no split ...passed 00:05:13.149 Test: blockdev write zeroes read split ...passed 00:05:13.149 Test: blockdev write zeroes read split partial ...passed 00:05:13.149 Test: blockdev reset ...passed 00:05:13.149 Test: blockdev write read 8 blocks ...passed 00:05:13.149 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 Suite: bdevio tests on: Malloc1p1 00:05:13.149 Test: blockdev write read block ...passed 00:05:13.149 Test: blockdev write zeroes read block ...passed 00:05:13.149 Test: blockdev write zeroes read no split ...passed 00:05:13.149 Test: blockdev write zeroes read split ...passed 00:05:13.149 Test: blockdev write zeroes read split partial ...passed 00:05:13.149 Test: blockdev reset ...passed 00:05:13.149 Test: blockdev write read 8 blocks ...passed 00:05:13.149 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 Suite: bdevio tests on: Malloc1p0 00:05:13.149 Test: blockdev write read block ...passed 00:05:13.149 Test: blockdev write zeroes read block ...passed 00:05:13.149 Test: blockdev write zeroes read no split ...passed 00:05:13.149 Test: blockdev write zeroes read split ...passed 00:05:13.149 Test: blockdev write 
zeroes read split partial ...passed 00:05:13.149 Test: blockdev reset ...passed 00:05:13.149 Test: blockdev write read 8 blocks ...passed 00:05:13.149 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 Suite: bdevio tests on: Malloc0 00:05:13.149 Test: blockdev write read block ...passed 00:05:13.149 Test: blockdev write zeroes read block ...passed 00:05:13.149 Test: blockdev write zeroes read no split ...passed 00:05:13.149 Test: blockdev write zeroes read split ...passed 00:05:13.149 Test: blockdev write zeroes read split partial ...passed 00:05:13.149 Test: blockdev reset ...passed 00:05:13.149 Test: blockdev write read 8 blocks ...passed 00:05:13.149 Test: blockdev write read size > 128k ...passed 00:05:13.149 Test: blockdev write read invalid size ...passed 00:05:13.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:13.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:13.149 Test: blockdev write read max offset ...passed 00:05:13.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:13.149 Test: blockdev writev readv 8 blocks ...passed 00:05:13.149 Test: blockdev writev readv 30 x 1block ...passed 00:05:13.149 Test: blockdev writev readv block ...passed 00:05:13.149 Test: blockdev writev readv size > 128k ...passed 00:05:13.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:13.149 Test: blockdev comparev and writev ...passed 00:05:13.149 Test: blockdev nvme passthru rw ...passed 00:05:13.149 Test: blockdev nvme passthru vendor specific ...passed 00:05:13.149 Test: blockdev nvme admin passthru ...passed 00:05:13.149 Test: blockdev copy ...passed 00:05:13.149 00:05:13.149 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.149 suites 16 16 n/a 0 0 00:05:13.149 tests 368 368 368 0 0 00:05:13.149 asserts 2224 2224 2224 0 n/a 00:05:13.149 00:05:13.149 Elapsed time = 0.555 seconds 00:05:13.149 0 00:05:13.149 02:29:59 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 47767 00:05:13.149 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47767 ']' 00:05:13.149 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47767 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 
00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 47767 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:05:13.150 killing process with pid 47767 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47767' 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47767 00:05:13.150 02:29:59 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47767 00:05:13.410 02:30:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:13.410 00:05:13.410 real 0m1.583s 00:05:13.410 user 0m3.099s 00:05:13.410 sys 0m0.710s 00:05:13.410 02:30:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.410 ************************************ 00:05:13.410 END TEST bdev_bounds 00:05:13.410 ************************************ 00:05:13.410 02:30:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:13.410 02:30:00 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 ************************************ 00:05:13.410 START TEST bdev_nbd 00:05:13.410 ************************************ 00:05:13.410 02:30:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:13.410 02:30:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:13.410 02:30:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ FreeBSD == Linux ]] 00:05:13.410 02:30:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # return 0 00:05:13.410 00:05:13.410 real 0m0.007s 00:05:13.410 user 0m0.000s 00:05:13.410 sys 0m0.009s 00:05:13.410 02:30:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.410 02:30:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 ************************************ 00:05:13.410 END TEST bdev_nbd 00:05:13.410 ************************************ 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:13.410 02:30:00 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:13.410 02:30:00 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:05:13.410 02:30:00 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:05:13.410 02:30:00 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
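The killprocess trace above takes the FreeBSD branch: it resolves the target's name with ps -c -o command <pid> | tail -1 rather than a Linux-style lookup, confirms the process is not sudo, and only then kills and waits on it; the bdev_nbd test that follows is a no-op here because the uname -s check shows the host is FreeBSD, not Linux. A minimal sketch of that FreeBSD-aware lookup, using the pid and process name from this log (the snippet is illustrative only, not SPDK's actual helper from common/autotest_common.sh):

    # illustrative sketch -- pid 47767 and the name "bdevio" come from the trace above
    pid=47767
    if [ "$(uname)" = FreeBSD ]; then
        # FreeBSD ps: -c prints just the executable name; tail -1 drops the header row
        process_name=$(ps -c -o command "$pid" | tail -1)
    else
        process_name=$(ps -o comm= -p "$pid")
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # valid here because bdevio was started by this same test shell
    fi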
00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.410 02:30:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:13.410 ************************************ 00:05:13.410 START TEST bdev_fio 00:05:13.410 ************************************ 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:05:13.410 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:05:13.410 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.001 02:30:00 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.001 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:05:14.002 02:30:00 
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.002 02:30:00 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:14.002 ************************************ 00:05:14.002 START TEST bdev_fio_rw_verify 00:05:14.002 ************************************ 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:14.002 02:30:00 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:14.002 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:14.002 fio-3.35 00:05:14.002 Starting 16 threads 00:05:14.571 EAL: TSC is not safe to use in SMP mode 00:05:14.571 EAL: TSC is not invariant 00:05:26.758 00:05:26.758 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101380: Thu Jul 25 02:30:11 2024 00:05:26.758 read: IOPS=293k, BW=1144MiB/s (1199MB/s)(11.2GiB/10005msec) 00:05:26.758 slat (nsec): min=222, max=391601k, avg=3326.43, stdev=438974.86 00:05:26.758 clat (nsec): min=692, max=391630k, avg=43745.16, stdev=1544338.08 00:05:26.758 lat (nsec): min=1556, max=391631k, avg=47071.60, stdev=1605503.75 00:05:26.758 clat percentiles (usec): 00:05:26.758 | 50.000th=[ 8], 99.000th=[ 766], 99.900th=[ 922], 00:05:26.758 | 99.990th=[ 86508], 99.999th=[244319] 00:05:26.758 write: IOPS=491k, BW=1918MiB/s (2011MB/s)(18.5GiB/9899msec); 0 zone resets 00:05:26.758 slat (nsec): min=383, max=559083k, avg=16658.00, stdev=797283.84 00:05:26.758 clat (nsec): min=645, max=559158k, avg=82703.03, stdev=1740503.98 00:05:26.758 lat (usec): min=9, max=559167, avg=99.36, stdev=1914.34 00:05:26.758 clat percentiles (usec): 00:05:26.758 | 50.000th=[ 41], 99.000th=[ 725], 99.900th=[ 2376], 00:05:26.758 | 99.990th=[ 94897], 99.999th=[164627] 00:05:26.758 bw ( MiB/s): min= 784, max= 3137, per=99.17%, avg=1902.15, stdev=49.69, samples=299 00:05:26.758 iops : min=200954, max=803097, avg=486949.19, stdev=12720.12, samples=299 00:05:26.758 lat (nsec) : 750=0.01%, 1000=0.01% 00:05:26.758 lat (usec) : 2=0.09%, 4=14.65%, 10=18.80%, 20=18.39%, 50=23.05% 00:05:26.758 lat (usec) : 100=23.13%, 250=0.51%, 500=0.10%, 750=0.26%, 1000=0.88% 00:05:26.758 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01%, 20=0.01%, 50=0.01% 00:05:26.758 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01% 00:05:26.758 cpu : usr=56.64%, sys=2.87%, ctx=727832, majf=0, minf=665 00:05:26.758 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:26.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:26.758 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:26.758 issued rwts: total=2928893,4860565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:05:26.758 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:26.758 00:05:26.758 Run status group 0 (all jobs): 00:05:26.758 READ: bw=1144MiB/s (1199MB/s), 1144MiB/s-1144MiB/s (1199MB/s-1199MB/s), io=11.2GiB (12.0GB), run=10005-10005msec 00:05:26.758 WRITE: bw=1918MiB/s (2011MB/s), 1918MiB/s-1918MiB/s (2011MB/s-2011MB/s), io=18.5GiB (19.9GB), run=9899-9899msec 00:05:26.758 ************************************ 00:05:26.758 END TEST bdev_fio_rw_verify 00:05:26.758 ************************************ 00:05:26.758 00:05:26.758 real 0m11.886s 00:05:26.758 user 1m34.065s 00:05:26.758 sys 0m6.253s 00:05:26.758 02:30:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:26.758 02:30:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:05:26.758 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:26.759 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c8605196-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c8605196-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1a516c43-784b-1858-9c02-500cb2304967"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' 
"uuid": "1a516c43-784b-1858-9c02-500cb2304967",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "11669106-45b5-3757-b7ef-4ac84623c7c6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "11669106-45b5-3757-b7ef-4ac84623c7c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b9dcf93d-18ac-de5e-8113-9c8b02214f88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b9dcf93d-18ac-de5e-8113-9c8b02214f88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f5c76cb7-5ec9-3150-8f52-a0c301973eff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f5c76cb7-5ec9-3150-8f52-a0c301973eff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "b19ef144-d9a0-ef57-a5b6-d9ea3c249bcb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b19ef144-d9a0-ef57-a5b6-d9ea3c249bcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7ea848e-3b72-1d5b-9b68-e38ddf50e9ca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7ea848e-3b72-1d5b-9b68-e38ddf50e9ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5d75afe7-85d0-195c-99a2-f46356600f5f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5d75afe7-85d0-195c-99a2-f46356600f5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8a4b3d7e-501d-825b-9a07-a43094e270d8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a4b3d7e-501d-825b-9a07-a43094e270d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e5e790ab-541c-aa5c-aa94-75cd0adbc7db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e5e790ab-541c-aa5c-aa94-75cd0adbc7db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2009f8a0-f5ac-1d5e-95d8-9c9c074d4872"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2009f8a0-f5ac-1d5e-95d8-9c9c074d4872",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5e7a2ed3-60af-9e58-b705-19554a7eef47"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5e7a2ed3-60af-9e58-b705-19554a7eef47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' 
"name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c86e63f4-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c86e63f4-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c86e63f4-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c865334f-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "c8666bc3-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c86f9400-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c86f9400-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c86f9400-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c867a43d-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc7",' ' "uuid": "c868dcc7-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c870cc56-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c870cc56-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c870cc56-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c86a153f-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c86be9fd-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c87a91f4-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c87a91f4-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:26.759 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:05:26.759 Malloc1p0 00:05:26.759 Malloc1p1 00:05:26.759 Malloc2p0 00:05:26.759 Malloc2p1 00:05:26.759 Malloc2p2 00:05:26.759 Malloc2p3 00:05:26.759 Malloc2p4 00:05:26.759 Malloc2p5 00:05:26.759 Malloc2p6 00:05:26.759 Malloc2p7 00:05:26.759 TestPT 00:05:26.759 raid0 00:05:26.759 concat0 ]] 00:05:26.759 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 
00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "c8605196-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c8605196-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1a516c43-784b-1858-9c02-500cb2304967"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1a516c43-784b-1858-9c02-500cb2304967",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "11669106-45b5-3757-b7ef-4ac84623c7c6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "11669106-45b5-3757-b7ef-4ac84623c7c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b9dcf93d-18ac-de5e-8113-9c8b02214f88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b9dcf93d-18ac-de5e-8113-9c8b02214f88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "f5c76cb7-5ec9-3150-8f52-a0c301973eff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f5c76cb7-5ec9-3150-8f52-a0c301973eff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "b19ef144-d9a0-ef57-a5b6-d9ea3c249bcb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b19ef144-d9a0-ef57-a5b6-d9ea3c249bcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a7ea848e-3b72-1d5b-9b68-e38ddf50e9ca"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a7ea848e-3b72-1d5b-9b68-e38ddf50e9ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "5d75afe7-85d0-195c-99a2-f46356600f5f"' ' ],' ' 
"product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5d75afe7-85d0-195c-99a2-f46356600f5f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8a4b3d7e-501d-825b-9a07-a43094e270d8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8a4b3d7e-501d-825b-9a07-a43094e270d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e5e790ab-541c-aa5c-aa94-75cd0adbc7db"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e5e790ab-541c-aa5c-aa94-75cd0adbc7db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2009f8a0-f5ac-1d5e-95d8-9c9c074d4872"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2009f8a0-f5ac-1d5e-95d8-9c9c074d4872",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "5e7a2ed3-60af-9e58-b705-19554a7eef47"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5e7a2ed3-60af-9e58-b705-19554a7eef47",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c86e63f4-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c86e63f4-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c86e63f4-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c865334f-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "c8666bc3-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "c86f9400-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c86f9400-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c86f9400-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c867a43d-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "c868dcc7-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c870cc56-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c870cc56-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c870cc56-4a2d-11ef-9c8e-7947904e2597",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c86a153f-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c86be9fd-4a2d-11ef-9c8e-7947904e2597",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "c87a91f4-4a2d-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "c87a91f4-4a2d-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf 
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.761 02:30:12 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:26.761 ************************************ 00:05:26.761 START TEST bdev_fio_trim 00:05:26.761 ************************************ 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:26.761 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:26.762 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:26.762 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:26.762 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:26.762 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:26.762 02:30:12 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:26.762 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:26.762 fio-3.35 00:05:26.762 Starting 14 threads 00:05:26.762 EAL: TSC is not safe to use in SMP mode 00:05:26.762 EAL: TSC is not invariant 00:05:38.991 00:05:38.991 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101399: Thu Jul 25 02:30:23 2024 00:05:38.991 write: IOPS=2881k, BW=11.0GiB/s (11.8GB/s)(110GiB/10001msec); 0 zone resets 00:05:38.991 slat (nsec): min=201, max=1511.8M, avg=1256.51, stdev=455828.53 00:05:38.991 clat (nsec): min=1029, max=1511.9M, avg=13205.80, stdev=1226503.59 00:05:38.991 lat (nsec): min=1608, max=1511.9M, avg=14462.32, stdev=1308468.56 00:05:38.991 clat percentiles (usec): 00:05:38.991 | 50.000th=[ 6], 99.000th=[ 12], 99.900th=[ 947], 99.990th=[ 971], 00:05:38.991 | 99.999th=[94897] 00:05:38.991 bw ( MiB/s): min= 3655, max=17848, per=100.00%, avg=11648.78, stdev=337.17, samples=257 00:05:38.991 iops : min=935678, max=4569124, avg=2982089.16, stdev=86315.02, samples=257 00:05:38.991 trim: IOPS=2881k, BW=11.0GiB/s (11.8GB/s)(110GiB/10001msec); 0 zone resets 00:05:38.991 slat (nsec): min=426, max=1157.5M, avg=1427.94, stdev=330024.14 00:05:38.991 clat (nsec): min=308, max=1511.9M, avg=9496.21, stdev=993727.70 00:05:38.991 lat (nsec): min=1431, max=1511.9M, avg=10924.16, stdev=1047099.84 00:05:38.991 clat percentiles (usec): 00:05:38.991 | 50.000th=[ 7], 99.000th=[ 13], 99.900th=[ 24], 99.990th=[ 39], 00:05:38.991 | 99.999th=[94897] 00:05:38.991 bw ( MiB/s): min= 3655, max=17848, per=100.00%, avg=11648.79, stdev=337.17, samples=257 00:05:38.991 iops : min=935696, max=4569120, avg=2982091.21, stdev=86315.01, samples=257 00:05:38.991 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:05:38.991 lat (usec) : 2=0.83%, 4=27.84%, 10=65.14%, 20=5.84%, 50=0.16% 00:05:38.991 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.14% 00:05:38.991 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 
20=0.01%, 50=0.01% 00:05:38.991 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:05:38.991 lat (msec) : 2000=0.01% 00:05:38.991 cpu : usr=62.89%, sys=4.76%, ctx=1347030, majf=0, minf=0 00:05:38.991 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:38.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:38.991 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:38.991 issued rwts: total=0,28811076,28811080,0 short=0,0,0,0 dropped=0,0,0,0 00:05:38.991 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:38.991 00:05:38.991 Run status group 0 (all jobs): 00:05:38.991 WRITE: bw=11.0GiB/s (11.8GB/s), 11.0GiB/s-11.0GiB/s (11.8GB/s-11.8GB/s), io=110GiB (118GB), run=10001-10001msec 00:05:38.991 TRIM: bw=11.0GiB/s (11.8GB/s), 11.0GiB/s-11.0GiB/s (11.8GB/s-11.8GB/s), io=110GiB (118GB), run=10001-10001msec 00:05:38.991 00:05:38.991 real 0m11.853s 00:05:38.991 user 1m33.161s 00:05:38.991 sys 0m9.135s 00:05:38.991 02:30:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.991 02:30:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:05:38.991 ************************************ 00:05:38.991 END TEST bdev_fio_trim 00:05:38.991 ************************************ 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:38.991 /home/vagrant/spdk_repo/spdk 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:05:38.991 00:05:38.991 real 0m24.332s 00:05:38.991 user 3m7.401s 00:05:38.991 sys 0m15.758s 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.991 02:30:24 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:38.991 ************************************ 00:05:38.991 END TEST bdev_fio 00:05:38.991 ************************************ 00:05:38.991 02:30:24 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:38.991 02:30:24 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:38.992 02:30:24 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:38.992 02:30:24 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:05:38.992 02:30:24 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.992 02:30:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:38.992 ************************************ 00:05:38.992 START TEST bdev_verify 00:05:38.992 ************************************ 00:05:38.992 02:30:24 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:38.992 [2024-07-25 02:30:24.595940] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:05:38.992 [2024-07-25 02:30:24.596202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:38.992 EAL: TSC is not safe to use in SMP mode 00:05:38.992 EAL: TSC is not invariant 00:05:38.992 [2024-07-25 02:30:25.016273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.992 [2024-07-25 02:30:25.108383] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:38.992 [2024-07-25 02:30:25.108446] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:38.992 [2024-07-25 02:30:25.110683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.992 [2024-07-25 02:30:25.110685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.992 [2024-07-25 02:30:25.166191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:38.992 [2024-07-25 02:30:25.166237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:38.992 [2024-07-25 02:30:25.174171] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:38.992 [2024-07-25 02:30:25.174190] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:38.992 [2024-07-25 02:30:25.182186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:38.992 [2024-07-25 02:30:25.182205] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:38.992 [2024-07-25 02:30:25.182211] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:38.992 [2024-07-25 02:30:25.230189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:38.992 [2024-07-25 02:30:25.230232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.992 [2024-07-25 02:30:25.230239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3cc63c836800 00:05:38.992 [2024-07-25 02:30:25.230245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.992 [2024-07-25 02:30:25.230587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.992 [2024-07-25 02:30:25.230610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:38.992 Running I/O for 5 seconds... 
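[editor's note] The verify pass started above is driven by the bdevperf example application against the JSON bdev configuration generated earlier in this run. A minimal standalone reproduction of the invocation traced above (flags and paths copied from the trace; it assumes the same repository layout and that test/bdev/bdev.json has already been produced by the harness):

  cd /home/vagrant/spdk_repo/spdk
  # -q 128: queue depth, -o 4096: 4 KiB IO size, -w verify: write/read-back pattern check,
  # -t 5: run for 5 seconds, -m 0x3: cores 0 and 1; -C is kept exactly as in the trace
  ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3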
00:05:44.265 00:05:44.265 Latency(us) 00:05:44.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:44.265 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.265 Verification LBA range: start 0x0 length 0x1000 00:05:44.265 Malloc0 : 5.02 6601.54 25.79 0.00 0.00 19348.50 6.89 44783.42 00:05:44.265 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.265 Verification LBA range: start 0x1000 length 0x1000 00:05:44.265 Malloc0 : 5.03 76.35 0.30 0.00 0.00 1673459.07 360.58 2676038.03 00:05:44.265 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.265 Verification LBA range: start 0x0 length 0x800 00:05:44.266 Malloc1p0 : 5.02 7219.77 28.20 0.00 0.00 17718.72 244.55 21020.79 00:05:44.266 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x800 length 0x800 00:05:44.266 Malloc1p0 : 5.01 8094.28 31.62 0.00 0.00 15803.91 233.84 22048.98 00:05:44.266 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x800 00:05:44.266 Malloc1p1 : 5.02 7219.20 28.20 0.00 0.00 17716.96 216.88 20563.82 00:05:44.266 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x800 length 0x800 00:05:44.266 Malloc1p1 : 5.01 8093.89 31.62 0.00 0.00 15801.75 228.49 19764.11 00:05:44.266 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p0 : 5.02 7218.85 28.20 0.00 0.00 17715.58 226.70 21363.52 00:05:44.266 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p0 : 5.01 8093.52 31.62 0.00 0.00 15800.43 224.02 17022.27 00:05:44.266 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p1 : 5.02 7218.48 28.20 0.00 0.00 17713.76 230.27 21363.52 00:05:44.266 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p1 : 5.01 8092.97 31.61 0.00 0.00 15799.22 223.13 17821.97 00:05:44.266 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p2 : 5.02 7218.11 28.20 0.00 0.00 17711.94 233.84 20792.30 00:05:44.266 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p2 : 5.01 8092.57 31.61 0.00 0.00 15797.66 244.55 18507.44 00:05:44.266 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p3 : 5.02 7217.75 28.19 0.00 0.00 17709.92 230.27 20221.09 00:05:44.266 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p3 : 5.01 8092.19 31.61 0.00 0.00 15795.84 227.59 19192.90 00:05:44.266 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p4 : 5.02 7217.39 28.19 0.00 0.00 17708.05 228.49 19764.11 
00:05:44.266 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p4 : 5.01 8091.80 31.61 0.00 0.00 15794.20 223.13 19992.60 00:05:44.266 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p5 : 5.02 7217.04 28.19 0.00 0.00 17706.34 228.49 19307.14 00:05:44.266 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p5 : 5.01 8091.41 31.61 0.00 0.00 15792.62 222.24 20678.06 00:05:44.266 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p6 : 5.02 7216.71 28.19 0.00 0.00 17704.51 225.81 15537.11 00:05:44.266 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p6 : 5.01 8091.00 31.61 0.00 0.00 15790.96 225.81 21477.76 00:05:44.266 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x200 00:05:44.266 Malloc2p7 : 5.02 7216.38 28.19 0.00 0.00 17702.84 233.84 16679.54 00:05:44.266 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x200 length 0x200 00:05:44.266 Malloc2p7 : 5.02 8090.64 31.60 0.00 0.00 15789.51 232.06 22277.47 00:05:44.266 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x1000 00:05:44.266 TestPT : 5.02 7143.48 27.90 0.00 0.00 17871.66 1042.47 18850.17 00:05:44.266 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x1000 length 0x1000 00:05:44.266 TestPT : 5.03 3333.20 13.02 0.00 0.00 38301.84 1049.61 92765.66 00:05:44.266 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x2000 00:05:44.266 raid0 : 5.02 7215.87 28.19 0.00 0.00 17696.96 235.63 17022.27 00:05:44.266 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x2000 length 0x2000 00:05:44.266 raid0 : 5.02 8090.28 31.60 0.00 0.00 15783.47 239.20 22163.22 00:05:44.266 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x2000 00:05:44.266 concat0 : 5.02 7215.52 28.19 0.00 0.00 17694.84 235.63 18850.17 00:05:44.266 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x2000 length 0x2000 00:05:44.266 concat0 : 5.02 8089.91 31.60 0.00 0.00 15781.95 233.84 22734.44 00:05:44.266 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x1000 00:05:44.266 raid1 : 5.02 7215.17 28.18 0.00 0.00 17692.65 317.74 21135.03 00:05:44.266 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x1000 length 0x1000 00:05:44.266 raid1 : 5.02 8089.54 31.60 0.00 0.00 15779.75 317.74 23305.66 00:05:44.266 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x0 length 0x4e2 00:05:44.266 
AIO0 : 5.13 530.35 2.07 0.00 0.00 237144.19 13537.85 294291.07 00:05:44.266 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:44.266 Verification LBA range: start 0x4e2 length 0x4e2 00:05:44.266 AIO0 : 5.13 535.09 2.09 0.00 0.00 235401.48 11595.71 299774.75 00:05:44.266 =================================================================================================================== 00:05:44.266 Total : 217240.25 848.59 0.00 0.00 18832.14 6.89 2676038.03 00:05:44.266 00:05:44.266 real 0m6.118s 00:05:44.266 user 0m10.254s 00:05:44.266 sys 0m0.461s 00:05:44.266 02:30:30 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.266 02:30:30 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:44.266 ************************************ 00:05:44.266 END TEST bdev_verify 00:05:44.266 ************************************ 00:05:44.266 02:30:30 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:44.266 02:30:30 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:44.266 02:30:30 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:05:44.266 02:30:30 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.266 02:30:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:44.266 ************************************ 00:05:44.266 START TEST bdev_verify_big_io 00:05:44.266 ************************************ 00:05:44.266 02:30:30 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:44.266 [2024-07-25 02:30:30.769953] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:44.266 [2024-07-25 02:30:30.770288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:44.525 EAL: TSC is not safe to use in SMP mode 00:05:44.525 EAL: TSC is not invariant 00:05:44.525 [2024-07-25 02:30:31.189571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.525 [2024-07-25 02:30:31.283076] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:44.525 [2024-07-25 02:30:31.283108] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:05:44.525 [2024-07-25 02:30:31.285409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.525 [2024-07-25 02:30:31.285408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.525 [2024-07-25 02:30:31.340858] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:44.525 [2024-07-25 02:30:31.340885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:44.525 [2024-07-25 02:30:31.348849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:44.525 [2024-07-25 02:30:31.348864] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:44.525 [2024-07-25 02:30:31.356860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:44.525 [2024-07-25 02:30:31.356875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:44.525 [2024-07-25 02:30:31.356881] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:44.525 [2024-07-25 02:30:31.404867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:44.525 [2024-07-25 02:30:31.404895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.525 [2024-07-25 02:30:31.404902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32a18c036800 00:05:44.525 [2024-07-25 02:30:31.404907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.525 [2024-07-25 02:30:31.405179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.525 [2024-07-25 02:30:31.405196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:44.785 [2024-07-25 02:30:31.505513] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505616] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505681] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505745] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505799] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505856] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505912] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.505993] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506051] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506122] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506193] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506265] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506326] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506395] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506456] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.506515] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:44.785 [2024-07-25 02:30:31.507339] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:44.785 [2024-07-25 02:30:31.507441] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:44.785 Running I/O for 5 seconds... 
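[editor's note] The queue-depth warnings above follow from the bdev sizes relative to the 64 KiB IO size used by this big-IO pass (-o 65536): as the warnings state, a verify job cannot keep more requests in flight than the bdev can accept simultaneously. Rough arithmetic using the block counts from the bdev dump earlier in the log (the reported limits of 32 and 78 are taken from the warnings themselves, not derived here):

  # Malloc2p* split bdevs: 8192 blocks x 512 B = 4 MiB
  echo $(( 8192 * 512 / 65536 ))    # 64 non-overlapping 64 KiB IOs -> reported limit 32
  # AIO0: 5000 blocks x 2048 B = 10,240,000 B
  echo $(( 5000 * 2048 / 65536 ))   # 156 non-overlapping 64 KiB IOs -> reported limit 78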
00:05:50.066 00:05:50.066 Latency(us) 00:05:50.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:50.066 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x100 00:05:50.066 Malloc0 : 5.05 4663.12 291.44 0.00 0.00 27376.06 58.91 70830.92 00:05:50.066 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x100 length 0x100 00:05:50.066 Malloc0 : 5.04 4643.33 290.21 0.00 0.00 27492.73 52.21 88652.90 00:05:50.066 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x80 00:05:50.066 Malloc1p0 : 5.07 1180.88 73.80 0.00 0.00 107948.54 531.95 150801.32 00:05:50.066 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x80 length 0x80 00:05:50.066 Malloc1p0 : 5.06 1616.03 101.00 0.00 0.00 78877.59 664.04 113786.45 00:05:50.066 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x80 00:05:50.066 Malloc1p1 : 5.08 601.93 37.62 0.00 0.00 211453.01 317.74 254077.38 00:05:50.066 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x80 length 0x80 00:05:50.066 Malloc1p1 : 5.07 602.34 37.65 0.00 0.00 211346.38 326.66 244937.91 00:05:50.066 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p0 : 5.06 585.33 36.58 0.00 0.00 54340.05 226.70 89109.87 00:05:50.066 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p0 : 5.06 585.47 36.59 0.00 0.00 54327.37 228.49 81341.32 00:05:50.066 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p1 : 5.06 585.29 36.58 0.00 0.00 54326.47 227.59 88652.90 00:05:50.066 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p1 : 5.06 585.44 36.59 0.00 0.00 54309.67 223.13 80427.37 00:05:50.066 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p2 : 5.06 585.25 36.58 0.00 0.00 54302.06 228.49 87738.95 00:05:50.066 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p2 : 5.06 585.40 36.59 0.00 0.00 54297.79 222.24 79970.40 00:05:50.066 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p3 : 5.06 585.22 36.58 0.00 0.00 54289.67 239.20 86825.00 00:05:50.066 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p3 : 5.06 585.37 36.59 0.00 0.00 54278.73 235.63 79056.45 00:05:50.066 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p4 : 5.06 585.19 36.57 0.00 0.00 54278.79 224.92 86368.03 00:05:50.066 Job: 
Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p4 : 5.06 585.34 36.58 0.00 0.00 54258.85 223.13 78599.48 00:05:50.066 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p5 : 5.06 585.15 36.57 0.00 0.00 54257.13 226.70 85911.06 00:05:50.066 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p5 : 5.06 585.30 36.58 0.00 0.00 54241.31 226.70 77685.53 00:05:50.066 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p6 : 5.06 585.12 36.57 0.00 0.00 54242.30 227.59 84997.11 00:05:50.066 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p6 : 5.06 585.26 36.58 0.00 0.00 54222.13 228.49 77228.56 00:05:50.066 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x20 00:05:50.066 Malloc2p7 : 5.06 585.09 36.57 0.00 0.00 54231.31 226.70 84540.14 00:05:50.066 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x20 length 0x20 00:05:50.066 Malloc2p7 : 5.06 585.23 36.58 0.00 0.00 54208.15 222.24 76314.61 00:05:50.066 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x100 00:05:50.066 TestPT : 5.11 591.42 36.96 0.00 0.00 213238.54 5398.00 218433.43 00:05:50.066 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x100 length 0x100 00:05:50.066 TestPT : 5.20 178.82 11.18 0.00 0.00 704666.14 5369.44 815241.09 00:05:50.066 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x200 00:05:50.066 raid0 : 5.08 604.95 37.81 0.00 0.00 209403.27 342.73 239454.22 00:05:50.066 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x200 length 0x200 00:05:50.066 raid0 : 5.07 605.35 37.83 0.00 0.00 209291.45 357.01 227572.91 00:05:50.066 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x200 00:05:50.066 concat0 : 5.08 604.92 37.81 0.00 0.00 209116.72 365.94 233056.59 00:05:50.066 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x200 length 0x200 00:05:50.066 concat0 : 5.07 605.32 37.83 0.00 0.00 209011.68 374.86 222089.22 00:05:50.066 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x100 00:05:50.066 raid1 : 5.08 608.27 38.02 0.00 0.00 207716.67 421.27 225745.01 00:05:50.066 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x100 length 0x100 00:05:50.066 raid1 : 5.07 608.72 38.05 0.00 0.00 207610.38 435.55 214777.64 00:05:50.066 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x0 length 0x4e 00:05:50.066 AIO0 : 5.08 601.75 37.61 0.00 
0.00 127812.97 415.92 138006.06 00:05:50.066 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:05:50.066 Verification LBA range: start 0x4e length 0x4e 00:05:50.066 AIO0 : 5.07 601.83 37.61 0.00 0.00 127776.49 549.80 128866.59 00:05:50.066 =================================================================================================================== 00:05:50.066 Total : 28283.43 1767.71 0.00 0.00 86469.87 52.21 815241.09 00:05:50.326 00:05:50.326 real 0m6.199s 00:05:50.326 user 0m11.223s 00:05:50.326 sys 0m0.519s 00:05:50.326 02:30:36 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.326 02:30:36 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:50.326 ************************************ 00:05:50.326 END TEST bdev_verify_big_io 00:05:50.326 ************************************ 00:05:50.326 02:30:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:50.326 02:30:37 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:50.326 02:30:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:50.326 02:30:37 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.326 02:30:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:50.326 ************************************ 00:05:50.326 START TEST bdev_write_zeroes 00:05:50.326 ************************************ 00:05:50.326 02:30:37 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:50.326 [2024-07-25 02:30:37.030128] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:50.326 [2024-07-25 02:30:37.030459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:50.585 EAL: TSC is not safe to use in SMP mode 00:05:50.586 EAL: TSC is not invariant 00:05:50.586 [2024-07-25 02:30:37.449941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.845 [2024-07-25 02:30:37.542829] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:50.845 [2024-07-25 02:30:37.544534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.845 [2024-07-25 02:30:37.599793] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:50.845 [2024-07-25 02:30:37.599823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:50.845 [2024-07-25 02:30:37.607785] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:50.845 [2024-07-25 02:30:37.607798] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:50.845 [2024-07-25 02:30:37.615795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:50.845 [2024-07-25 02:30:37.615808] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:50.845 [2024-07-25 02:30:37.615813] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:50.845 [2024-07-25 02:30:37.663801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:50.845 [2024-07-25 02:30:37.663828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:50.845 [2024-07-25 02:30:37.663835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2b317ee36800
00:05:50.845 [2024-07-25 02:30:37.663841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:50.845 [2024-07-25 02:30:37.664130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:50.845 [2024-07-25 02:30:37.664148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:51.104 Running I/O for 1 seconds...
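The write_zeroes pass reported below was started with the bdevperf invocation echoed a few lines earlier; it is shown standalone here in case the 1-second run needs to be reproduced locally. The paths are the ones used by this CI job, so adjust them to your own checkout.

# -q 128  queue depth per job, -o 4096  IO size in bytes,
# -w write_zeroes  workload type, -t 1  run time in seconds
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1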
00:05:52.042 00:05:52.042 Latency(us) 00:05:52.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:52.042 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc0 : 1.00 37575.02 146.78 0.00 0.00 3405.96 141.02 6254.83 00:05:52.042 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc1p0 : 1.01 37571.50 146.76 0.00 0.00 3405.32 166.90 6112.02 00:05:52.042 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc1p1 : 1.01 37568.33 146.75 0.00 0.00 3404.68 166.90 5997.78 00:05:52.042 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p0 : 1.01 37565.60 146.74 0.00 0.00 3403.56 163.33 5883.54 00:05:52.042 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p1 : 1.01 37562.87 146.73 0.00 0.00 3402.53 164.22 5769.29 00:05:52.042 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p2 : 1.01 37559.69 146.72 0.00 0.00 3401.64 166.01 5712.17 00:05:52.042 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p3 : 1.01 37555.10 146.70 0.00 0.00 3401.03 161.55 5569.37 00:05:52.042 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p4 : 1.01 37552.30 146.69 0.00 0.00 3399.96 165.12 5426.56 00:05:52.042 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p5 : 1.01 37544.42 146.66 0.00 0.00 3399.43 160.65 5312.32 00:05:52.042 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p6 : 1.01 37541.62 146.65 0.00 0.00 3398.47 159.76 5255.20 00:05:52.042 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 Malloc2p7 : 1.01 37538.93 146.64 0.00 0.00 3397.47 161.55 5198.08 00:05:52.042 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 TestPT : 1.01 37536.17 146.63 0.00 0.00 3397.02 160.65 5083.83 00:05:52.042 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 raid0 : 1.01 37616.56 146.94 0.00 0.00 3387.88 203.50 5140.95 00:05:52.042 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 concat0 : 1.01 37612.53 146.92 0.00 0.00 3387.16 200.82 5112.39 00:05:52.042 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 raid1 : 1.01 37607.49 146.90 0.00 0.00 3385.83 367.72 4941.03 00:05:52.042 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:52.042 AIO0 : 1.09 1594.42 6.23 0.00 0.00 76791.58 706.88 222089.22 00:05:52.042 =================================================================================================================== 00:05:52.042 Total : 565102.54 2207.43 0.00 0.00 3622.12 141.02 222089.22 00:05:52.301 00:05:52.301 real 0m2.058s 00:05:52.301 user 0m1.460s 00:05:52.301 sys 0m0.485s 00:05:52.301 02:30:39 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.301 02:30:39 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:52.301 ************************************ 00:05:52.301 END TEST bdev_write_zeroes 00:05:52.301 ************************************ 00:05:52.301 02:30:39 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:05:52.301 02:30:39 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:52.301 02:30:39 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:52.301 02:30:39 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.301 02:30:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:52.301 ************************************ 00:05:52.301 START TEST bdev_json_nonenclosed 00:05:52.301 ************************************ 00:05:52.301 02:30:39 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:52.301 [2024-07-25 02:30:39.151337] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:52.301 [2024-07-25 02:30:39.151677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:52.869 EAL: TSC is not safe to use in SMP mode 00:05:52.869 EAL: TSC is not invariant 00:05:52.869 [2024-07-25 02:30:39.569002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.869 [2024-07-25 02:30:39.659867] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:52.869 [2024-07-25 02:30:39.661586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.869 [2024-07-25 02:30:39.661620] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:05:52.869 [2024-07-25 02:30:39.661627] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:52.869 [2024-07-25 02:30:39.661633] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.129 00:05:53.129 real 0m0.632s 00:05:53.129 user 0m0.179s 00:05:53.129 sys 0m0.450s 00:05:53.129 02:30:39 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:05:53.129 02:30:39 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.129 02:30:39 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:05:53.129 ************************************ 00:05:53.129 END TEST bdev_json_nonenclosed 00:05:53.129 ************************************ 00:05:53.129 02:30:39 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:05:53.129 02:30:39 blockdev_general -- bdev/blockdev.sh@781 -- # true 00:05:53.129 02:30:39 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:53.129 02:30:39 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:53.129 02:30:39 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.129 02:30:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:53.129 ************************************ 00:05:53.129 START TEST bdev_json_nonarray 00:05:53.129 ************************************ 00:05:53.129 02:30:39 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:53.129 [2024-07-25 02:30:39.839978] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:05:53.129 [2024-07-25 02:30:39.840304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:53.389 EAL: TSC is not safe to use in SMP mode 00:05:53.389 EAL: TSC is not invariant 00:05:53.389 [2024-07-25 02:30:40.261898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.649 [2024-07-25 02:30:40.349116] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:53.649 [2024-07-25 02:30:40.350789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.649 [2024-07-25 02:30:40.350838] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
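bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed --json config, and the shutdown plus non-zero exit recorded next is the expected outcome. The fixture files themselves are not printed in this log, but the two error messages describe their shapes; the snippet below is an assumed minimal illustration of those shapes, not the actual test fixtures.

# Valid shape for an SPDK JSON config: a top-level object with a "subsystems" array.
cat > valid.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF

# "not enclosed in {}": the top-level value is not a JSON object.
cat > nonenclosed.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF

# "'subsystems' should be an array": the key maps to an object instead of an array.
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF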
00:05:53.649 [2024-07-25 02:30:40.350859] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:53.649 [2024-07-25 02:30:40.350865] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.649 00:05:53.649 real 0m0.632s 00:05:53.649 user 0m0.167s 00:05:53.649 sys 0m0.458s 00:05:53.649 02:30:40 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:05:53.649 02:30:40 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.649 02:30:40 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:05:53.649 ************************************ 00:05:53.649 END TEST bdev_json_nonarray 00:05:53.649 ************************************ 00:05:53.649 02:30:40 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:05:53.649 02:30:40 blockdev_general -- bdev/blockdev.sh@784 -- # true 00:05:53.649 02:30:40 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]] 00:05:53.649 02:30:40 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite '' 00:05:53.649 02:30:40 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:53.649 02:30:40 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.649 02:30:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:53.649 ************************************ 00:05:53.649 START TEST bdev_qos 00:05:53.649 ************************************ 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:05:53.649 Process qos testing pid: 48168 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=48168 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 48168' 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 48168 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48168 ']' 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.649 02:30:40 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:53.909 [2024-07-25 02:30:40.534952] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:05:53.909 [2024-07-25 02:30:40.535206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:54.168 EAL: TSC is not safe to use in SMP mode 00:05:54.168 EAL: TSC is not invariant 00:05:54.168 [2024-07-25 02:30:40.950761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.168 [2024-07-25 02:30:41.043202] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:54.168 [2024-07-25 02:30:41.044847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:54.737 Malloc_0 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.737 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:54.737 [ 00:05:54.737 { 00:05:54.737 "name": "Malloc_0", 00:05:54.737 "aliases": [ 00:05:54.737 "e302b38a-4a2d-11ef-9c8e-7947904e2597" 00:05:54.738 ], 00:05:54.738 "product_name": "Malloc disk", 00:05:54.738 "block_size": 512, 00:05:54.738 "num_blocks": 262144, 00:05:54.738 "uuid": "e302b38a-4a2d-11ef-9c8e-7947904e2597", 00:05:54.738 "assigned_rate_limits": { 00:05:54.738 "rw_ios_per_sec": 0, 00:05:54.738 "rw_mbytes_per_sec": 0, 00:05:54.738 "r_mbytes_per_sec": 0, 00:05:54.738 "w_mbytes_per_sec": 0 00:05:54.738 }, 00:05:54.738 "claimed": false, 00:05:54.738 "zoned": false, 00:05:54.738 "supported_io_types": { 00:05:54.738 "read": true, 00:05:54.738 "write": true, 00:05:54.738 "unmap": true, 00:05:54.738 "flush": true, 00:05:54.738 "reset": true, 00:05:54.738 "nvme_admin": false, 00:05:54.738 "nvme_io": false, 00:05:54.738 "nvme_io_md": false, 00:05:54.738 "write_zeroes": true, 00:05:54.738 "zcopy": true, 00:05:54.738 
"get_zone_info": false, 00:05:54.738 "zone_management": false, 00:05:54.738 "zone_append": false, 00:05:54.738 "compare": false, 00:05:54.738 "compare_and_write": false, 00:05:54.738 "abort": true, 00:05:54.738 "seek_hole": false, 00:05:54.738 "seek_data": false, 00:05:54.738 "copy": true, 00:05:54.738 "nvme_iov_md": false 00:05:54.738 }, 00:05:54.738 "memory_domains": [ 00:05:54.738 { 00:05:54.738 "dma_device_id": "system", 00:05:54.738 "dma_device_type": 1 00:05:54.738 }, 00:05:54.738 { 00:05:54.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.738 "dma_device_type": 2 00:05:54.738 } 00:05:54.738 ], 00:05:54.738 "driver_specific": {} 00:05:54.738 } 00:05:54.738 ] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:54.738 Null_1 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:54.738 [ 00:05:54.738 { 00:05:54.738 "name": "Null_1", 00:05:54.738 "aliases": [ 00:05:54.738 "e30969af-4a2d-11ef-9c8e-7947904e2597" 00:05:54.738 ], 00:05:54.738 "product_name": "Null disk", 00:05:54.738 "block_size": 512, 00:05:54.738 "num_blocks": 262144, 00:05:54.738 "uuid": "e30969af-4a2d-11ef-9c8e-7947904e2597", 00:05:54.738 "assigned_rate_limits": { 00:05:54.738 "rw_ios_per_sec": 0, 00:05:54.738 "rw_mbytes_per_sec": 0, 00:05:54.738 "r_mbytes_per_sec": 0, 00:05:54.738 "w_mbytes_per_sec": 0 00:05:54.738 }, 00:05:54.738 "claimed": false, 00:05:54.738 "zoned": false, 00:05:54.738 "supported_io_types": { 00:05:54.738 "read": true, 00:05:54.738 "write": true, 00:05:54.738 "unmap": false, 00:05:54.738 "flush": false, 00:05:54.738 "reset": true, 00:05:54.738 "nvme_admin": false, 00:05:54.738 "nvme_io": false, 00:05:54.738 "nvme_io_md": false, 00:05:54.738 "write_zeroes": true, 00:05:54.738 "zcopy": 
false, 00:05:54.738 "get_zone_info": false, 00:05:54.738 "zone_management": false, 00:05:54.738 "zone_append": false, 00:05:54.738 "compare": false, 00:05:54.738 "compare_and_write": false, 00:05:54.738 "abort": true, 00:05:54.738 "seek_hole": false, 00:05:54.738 "seek_data": false, 00:05:54.738 "copy": false, 00:05:54.738 "nvme_iov_md": false 00:05:54.738 }, 00:05:54.738 "driver_specific": {} 00:05:54.738 } 00:05:54.738 ] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:05:54.738 02:30:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:05:54.738 Running I/O for 60 seconds... 
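While the 60-second unthrottled run above measures the baseline, it is worth spelling out the check the harness applies next: an iostat.py sample gives the unthrottled IOPS, a limit is derived from it and applied with bdev_set_qos_limit, and the throttled rate must then land within +/-10% of that limit. The shell sketch below is reconstructed from the xtrace output that follows; the exact helper lives in test/bdev/blockdev.sh, and the divide-by-four derivation is an assumption that happens to reproduce the 186000 figure.

unthrottled_iops=746668                               # sampled below with no limit set
iops_limit=$(( unthrottled_iops / 4 / 1000 * 1000 ))  # -> 186000 (assumed derivation)

# rpc_cmd in the log is a wrapper around the stock rpc.py helper;
# Malloc_0 is the 128 MiB malloc bdev created earlier in this test.
scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "${iops_limit}" Malloc_0

measured_iops=186040                                  # reported by iostat.py below
lower=$(( iops_limit * 90 / 100 ))                    # 167400
upper=$(( iops_limit * 110 / 100 ))                   # 204600
(( measured_iops >= lower && measured_iops <= upper )) && echo "QoS limit respected"

The later bdev_qos_bw and bdev_qos_ro_bw stages apply the same +/-10% window to --rw_mbytes_per_sec and --r_mbytes_per_sec limits.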
00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 746668.96 2986675.86 0.00 0.00 3229696.00 0.00 0.00 ' 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=746668.96 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 746668 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=746668 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=186000 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 186000 -gt 1000 ']' 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 186000 Malloc_0 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 186000 IOPS Malloc_0 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.342 02:30:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:01.342 ************************************ 00:06:01.342 START TEST bdev_qos_iops 00:06:01.342 ************************************ 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 186000 IOPS Malloc_0 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=186000 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:06:01.342 02:30:47 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:06:06.629 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 186040.79 744163.17 0.00 0.00 803520.00 0.00 0.00 ' 00:06:06.629 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:06:06.629 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:06:06.629 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=186040.79 00:06:06.630 02:30:52 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 186040 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=186040 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=167400 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=204600 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 186040 -lt 167400 ']' 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 186040 -gt 204600 ']' 00:06:06.630 00:06:06.630 real 0m5.517s 00:06:06.630 user 0m0.134s 00:06:06.630 sys 0m0.010s 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.630 02:30:52 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:06:06.630 ************************************ 00:06:06.630 END TEST bdev_qos_iops 00:06:06.630 ************************************ 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:06:06.630 02:30:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 775951.95 3103807.82 0.00 0.00 3351552.00 0.00 0.00 ' 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=3351552.00 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 3351552 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=3351552 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=327 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 327 -lt 2 ']' 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 327 Null_1 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 327 BANDWIDTH Null_1 00:06:11.943 02:30:58 
blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.943 02:30:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:11.943 ************************************ 00:06:11.943 START TEST bdev_qos_bw 00:06:11.943 ************************************ 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 327 BANDWIDTH Null_1 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=327 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:06:11.943 02:30:58 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 83742.42 334969.66 0.00 0.00 360632.00 0.00 0.00 ' 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=360632.00 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 360632 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=360632 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=334848 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=301363 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=368332 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 360632 -lt 301363 ']' 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 360632 -gt 368332 ']' 00:06:17.224 00:06:17.224 real 0m5.492s 00:06:17.224 user 0m0.086s 00:06:17.224 sys 0m0.049s 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:06:17.224 ************************************ 00:06:17.224 END TEST bdev_qos_bw 00:06:17.224 ************************************ 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1142 -- # return 0 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.224 02:31:03 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:17.224 ************************************ 00:06:17.224 START TEST bdev_qos_ro_bw 00:06:17.224 ************************************ 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:17.224 02:31:03 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.31 2049.26 0.00 0.00 2148.00 0.00 0.00 ' 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2148.00 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2148 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2148 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:06:22.531 02:31:09 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2148 -lt 1843 ']' 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2148 -gt 2252 ']' 00:06:22.531 00:06:22.531 real 0m5.445s 00:06:22.531 user 0m0.116s 00:06:22.531 sys 0m0.024s 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.531 02:31:09 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:06:22.531 ************************************ 00:06:22.531 END TEST bdev_qos_ro_bw 00:06:22.531 ************************************ 00:06:22.531 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:06:22.531 02:31:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:06:22.531 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.531 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:23.101 00:06:23.101 Latency(us) 00:06:23.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.101 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:23.101 Malloc_0 : 28.06 254846.57 995.49 0.00 0.00 995.08 321.31 504498.97 00:06:23.101 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:23.101 Null_1 : 28.08 484136.46 1891.16 0.00 0.00 528.58 51.77 21592.01 00:06:23.101 =================================================================================================================== 00:06:23.101 Total : 738983.03 2886.65 0.00 0.00 689.34 51.77 504498.97 00:06:23.101 0 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 48168 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48168 ']' 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48168 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48168 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:23.101 killing process with pid 48168 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48168' 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48168 00:06:23.101 Received shutdown signal, test time was about 28.106527 seconds 00:06:23.101 00:06:23.101 Latency(us) 00:06:23.101 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.101 =================================================================================================================== 00:06:23.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48168 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:06:23.101 00:06:23.101 real 0m29.361s 00:06:23.101 user 0m29.946s 00:06:23.101 sys 0m0.790s 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.101 02:31:09 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:23.101 ************************************ 00:06:23.101 END TEST bdev_qos 00:06:23.101 ************************************ 00:06:23.101 02:31:09 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:23.101 02:31:09 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:06:23.101 02:31:09 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:23.101 02:31:09 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.102 02:31:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:23.102 ************************************ 00:06:23.102 START TEST bdev_qd_sampling 00:06:23.102 ************************************ 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:06:23.102 Process bdev QD sampling period testing pid: 48389 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=48389 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 48389' 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 48389 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48389 ']' 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.102 02:31:09 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:23.102 [2024-07-25 02:31:09.950927] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:06:23.102 [2024-07-25 02:31:09.951267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:23.670 EAL: TSC is not safe to use in SMP mode 00:06:23.670 EAL: TSC is not invariant 00:06:23.670 [2024-07-25 02:31:10.371880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.670 [2024-07-25 02:31:10.463620] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:23.670 [2024-07-25 02:31:10.463646] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:23.670 [2024-07-25 02:31:10.465854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.670 [2024-07-25 02:31:10.465855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:24.240 Malloc_QD 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:24.240 [ 00:06:24.240 { 00:06:24.240 "name": "Malloc_QD", 00:06:24.240 "aliases": [ 00:06:24.240 "f48c95af-4a2d-11ef-9c8e-7947904e2597" 00:06:24.240 ], 00:06:24.240 "product_name": "Malloc disk", 00:06:24.240 "block_size": 512, 00:06:24.240 "num_blocks": 262144, 00:06:24.240 "uuid": "f48c95af-4a2d-11ef-9c8e-7947904e2597", 00:06:24.240 "assigned_rate_limits": { 00:06:24.240 "rw_ios_per_sec": 0, 00:06:24.240 "rw_mbytes_per_sec": 0, 00:06:24.240 "r_mbytes_per_sec": 0, 00:06:24.240 "w_mbytes_per_sec": 0 00:06:24.240 }, 00:06:24.240 "claimed": false, 
00:06:24.240 "zoned": false, 00:06:24.240 "supported_io_types": { 00:06:24.240 "read": true, 00:06:24.240 "write": true, 00:06:24.240 "unmap": true, 00:06:24.240 "flush": true, 00:06:24.240 "reset": true, 00:06:24.240 "nvme_admin": false, 00:06:24.240 "nvme_io": false, 00:06:24.240 "nvme_io_md": false, 00:06:24.240 "write_zeroes": true, 00:06:24.240 "zcopy": true, 00:06:24.240 "get_zone_info": false, 00:06:24.240 "zone_management": false, 00:06:24.240 "zone_append": false, 00:06:24.240 "compare": false, 00:06:24.240 "compare_and_write": false, 00:06:24.240 "abort": true, 00:06:24.240 "seek_hole": false, 00:06:24.240 "seek_data": false, 00:06:24.240 "copy": true, 00:06:24.240 "nvme_iov_md": false 00:06:24.240 }, 00:06:24.240 "memory_domains": [ 00:06:24.240 { 00:06:24.240 "dma_device_id": "system", 00:06:24.240 "dma_device_type": 1 00:06:24.240 }, 00:06:24.240 { 00:06:24.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.240 "dma_device_type": 2 00:06:24.240 } 00:06:24.240 ], 00:06:24.240 "driver_specific": {} 00:06:24.240 } 00:06:24.240 ] 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:06:24.240 02:31:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:24.240 Running I/O for 5 seconds... 00:06:26.147 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:06:26.147 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:06:26.147 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:06:26.405 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:06:26.405 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:06:26.405 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.405 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:26.405 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:06:26.406 "tick_rate": 2294609042, 00:06:26.406 "ticks": 714596883440, 00:06:26.406 "bdevs": [ 00:06:26.406 { 00:06:26.406 "name": "Malloc_QD", 00:06:26.406 "bytes_read": 14674858496, 00:06:26.406 "num_read_ops": 3582723, 00:06:26.406 "bytes_written": 0, 00:06:26.406 "num_write_ops": 0, 00:06:26.406 "bytes_unmapped": 0, 00:06:26.406 "num_unmap_ops": 0, 00:06:26.406 "bytes_copied": 0, 00:06:26.406 "num_copy_ops": 0, 00:06:26.406 "read_latency_ticks": 2422345785558, 00:06:26.406 "max_read_latency_ticks": 904580, 00:06:26.406 "min_read_latency_ticks": 
35816, 00:06:26.406 "write_latency_ticks": 0, 00:06:26.406 "max_write_latency_ticks": 0, 00:06:26.406 "min_write_latency_ticks": 0, 00:06:26.406 "unmap_latency_ticks": 0, 00:06:26.406 "max_unmap_latency_ticks": 0, 00:06:26.406 "min_unmap_latency_ticks": 0, 00:06:26.406 "copy_latency_ticks": 0, 00:06:26.406 "max_copy_latency_ticks": 0, 00:06:26.406 "min_copy_latency_ticks": 0, 00:06:26.406 "io_error": {}, 00:06:26.406 "queue_depth_polling_period": 10, 00:06:26.406 "queue_depth": 512, 00:06:26.406 "io_time": 400, 00:06:26.406 "weighted_io_time": 204800 00:06:26.406 } 00:06:26.406 ] 00:06:26.406 }' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:26.406 00:06:26.406 Latency(us) 00:06:26.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.406 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:26.406 Malloc_QD : 2.09 861521.23 3365.32 0.00 0.00 296.93 48.42 394.50 00:06:26.406 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:26.406 Malloc_QD : 2.09 874090.43 3414.42 0.00 0.00 292.66 44.18 365.94 00:06:26.406 =================================================================================================================== 00:06:26.406 Total : 1735611.66 6779.73 0.00 0.00 294.78 44.18 394.50 00:06:26.406 0 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 48389 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48389 ']' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48389 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48389 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:26.406 killing process with pid 48389 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48389' 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48389 00:06:26.406 Received shutdown signal, test time was about 2.128385 seconds 00:06:26.406 00:06:26.406 Latency(us) 
00:06:26.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:26.406 =================================================================================================================== 00:06:26.406 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48389 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:06:26.406 00:06:26.406 real 0m3.339s 00:06:26.406 user 0m6.022s 00:06:26.406 sys 0m0.552s 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.406 02:31:13 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:26.406 ************************************ 00:06:26.406 END TEST bdev_qd_sampling 00:06:26.406 ************************************ 00:06:26.663 02:31:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:26.663 02:31:13 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:06:26.663 02:31:13 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.663 02:31:13 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.663 02:31:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 ************************************ 00:06:26.663 START TEST bdev_error 00:06:26.663 ************************************ 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=48432 00:06:26.663 Process error testing pid: 48432 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 48432' 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:06:26.663 02:31:13 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 48432 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48432 ']' 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.663 02:31:13 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.663 [2024-07-25 02:31:13.343175] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:06:26.663 [2024-07-25 02:31:13.343478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:26.919 EAL: TSC is not safe to use in SMP mode 00:06:26.919 EAL: TSC is not invariant 00:06:26.919 [2024-07-25 02:31:13.764174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.178 [2024-07-25 02:31:13.855349] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:27.178 [2024-07-25 02:31:13.856971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:06:27.438 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.438 Dev_1 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.438 [ 00:06:27.438 { 00:06:27.438 "name": "Dev_1", 00:06:27.438 "aliases": [ 00:06:27.438 "f692a1f5-4a2d-11ef-9c8e-7947904e2597" 00:06:27.438 ], 00:06:27.438 "product_name": "Malloc disk", 00:06:27.438 "block_size": 512, 00:06:27.438 "num_blocks": 262144, 00:06:27.438 "uuid": "f692a1f5-4a2d-11ef-9c8e-7947904e2597", 00:06:27.438 "assigned_rate_limits": { 00:06:27.438 "rw_ios_per_sec": 0, 00:06:27.438 "rw_mbytes_per_sec": 0, 00:06:27.438 "r_mbytes_per_sec": 0, 00:06:27.438 "w_mbytes_per_sec": 0 00:06:27.438 }, 00:06:27.438 "claimed": false, 00:06:27.438 "zoned": false, 00:06:27.438 "supported_io_types": { 00:06:27.438 "read": true, 00:06:27.438 "write": true, 00:06:27.438 "unmap": true, 00:06:27.438 "flush": true, 00:06:27.438 "reset": true, 00:06:27.438 "nvme_admin": false, 00:06:27.438 "nvme_io": false, 00:06:27.438 "nvme_io_md": false, 00:06:27.438 "write_zeroes": true, 00:06:27.438 "zcopy": true, 
00:06:27.438 "get_zone_info": false, 00:06:27.438 "zone_management": false, 00:06:27.438 "zone_append": false, 00:06:27.438 "compare": false, 00:06:27.438 "compare_and_write": false, 00:06:27.438 "abort": true, 00:06:27.438 "seek_hole": false, 00:06:27.438 "seek_data": false, 00:06:27.438 "copy": true, 00:06:27.438 "nvme_iov_md": false 00:06:27.438 }, 00:06:27.438 "memory_domains": [ 00:06:27.438 { 00:06:27.438 "dma_device_id": "system", 00:06:27.438 "dma_device_type": 1 00:06:27.438 }, 00:06:27.438 { 00:06:27.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.438 "dma_device_type": 2 00:06:27.438 } 00:06:27.438 ], 00:06:27.438 "driver_specific": {} 00:06:27.438 } 00:06:27.438 ] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:27.438 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.438 true 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.438 Dev_2 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.438 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.698 [ 00:06:27.698 { 00:06:27.698 "name": "Dev_2", 00:06:27.698 "aliases": [ 00:06:27.698 "f699f4b8-4a2d-11ef-9c8e-7947904e2597" 00:06:27.698 ], 00:06:27.698 "product_name": "Malloc disk", 00:06:27.698 "block_size": 512, 00:06:27.698 "num_blocks": 262144, 00:06:27.698 "uuid": "f699f4b8-4a2d-11ef-9c8e-7947904e2597", 00:06:27.698 "assigned_rate_limits": { 00:06:27.698 "rw_ios_per_sec": 0, 00:06:27.698 "rw_mbytes_per_sec": 0, 
00:06:27.698 "r_mbytes_per_sec": 0, 00:06:27.698 "w_mbytes_per_sec": 0 00:06:27.698 }, 00:06:27.698 "claimed": false, 00:06:27.698 "zoned": false, 00:06:27.698 "supported_io_types": { 00:06:27.698 "read": true, 00:06:27.698 "write": true, 00:06:27.698 "unmap": true, 00:06:27.698 "flush": true, 00:06:27.698 "reset": true, 00:06:27.698 "nvme_admin": false, 00:06:27.698 "nvme_io": false, 00:06:27.698 "nvme_io_md": false, 00:06:27.698 "write_zeroes": true, 00:06:27.698 "zcopy": true, 00:06:27.698 "get_zone_info": false, 00:06:27.698 "zone_management": false, 00:06:27.698 "zone_append": false, 00:06:27.698 "compare": false, 00:06:27.698 "compare_and_write": false, 00:06:27.698 "abort": true, 00:06:27.698 "seek_hole": false, 00:06:27.698 "seek_data": false, 00:06:27.698 "copy": true, 00:06:27.698 "nvme_iov_md": false 00:06:27.698 }, 00:06:27.698 "memory_domains": [ 00:06:27.698 { 00:06:27.698 "dma_device_id": "system", 00:06:27.698 "dma_device_type": 1 00:06:27.698 }, 00:06:27.698 { 00:06:27.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.698 "dma_device_type": 2 00:06:27.698 } 00:06:27.698 ], 00:06:27.698 "driver_specific": {} 00:06:27.698 } 00:06:27.698 ] 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:27.698 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.698 02:31:14 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.698 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:06:27.698 02:31:14 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:27.698 Running I/O for 5 seconds... 00:06:28.638 02:31:15 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 48432 00:06:28.638 Process is existed as continue on error is set. Pid: 48432 00:06:28.638 02:31:15 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. 
Pid: 48432' 00:06:28.638 02:31:15 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:06:28.638 02:31:15 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.638 02:31:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:28.638 02:31:15 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.638 02:31:15 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:06:28.638 02:31:15 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.638 02:31:15 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:28.638 02:31:15 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.638 02:31:15 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:06:28.638 Timeout while waiting for response: 00:06:28.638 00:06:28.638 00:06:32.845 00:06:32.846 Latency(us) 00:06:32.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:32.846 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:32.846 EE_Dev_1 : 0.98 390110.16 1523.87 5.10 0.00 40.83 21.31 111.57 00:06:32.846 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:32.846 Dev_2 : 5.00 808867.67 3159.64 0.00 0.00 19.60 5.61 18050.46 00:06:32.846 =================================================================================================================== 00:06:32.846 Total : 1198977.83 4683.51 5.10 0.00 21.44 5.61 18050.46 00:06:33.782 02:31:20 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 48432 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48432 ']' 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48432 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48432 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:33.782 killing process with pid 48432 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48432' 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48432 00:06:33.782 Received shutdown signal, test time was about 5.000000 seconds 00:06:33.782 00:06:33.782 Latency(us) 00:06:33.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.782 =================================================================================================================== 00:06:33.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:33.782 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48432 00:06:34.041 02:31:20 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=48472 00:06:34.041 Process error testing pid: 48472 00:06:34.041 02:31:20 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 
'Process error testing pid: 48472' 00:06:34.041 02:31:20 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:06:34.041 02:31:20 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 48472 00:06:34.041 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48472 ']' 00:06:34.041 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.041 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.041 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.041 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.041 02:31:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:34.041 [2024-07-25 02:31:20.719489] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:06:34.041 [2024-07-25 02:31:20.719753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:34.300 EAL: TSC is not safe to use in SMP mode 00:06:34.300 EAL: TSC is not invariant 00:06:34.300 [2024-07-25 02:31:21.146355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.559 [2024-07-25 02:31:21.236978] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:34.559 [2024-07-25 02:31:21.238628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:06:34.819 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 Dev_1 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.819 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 [ 00:06:34.819 { 00:06:34.819 "name": "Dev_1", 00:06:34.819 "aliases": [ 00:06:34.819 "faf8fb80-4a2d-11ef-9c8e-7947904e2597" 00:06:34.819 ], 00:06:34.819 "product_name": "Malloc disk", 00:06:34.819 "block_size": 512, 00:06:34.819 "num_blocks": 262144, 00:06:34.819 "uuid": "faf8fb80-4a2d-11ef-9c8e-7947904e2597", 00:06:34.819 "assigned_rate_limits": { 00:06:34.819 "rw_ios_per_sec": 0, 00:06:34.819 "rw_mbytes_per_sec": 0, 00:06:34.819 "r_mbytes_per_sec": 0, 00:06:34.819 "w_mbytes_per_sec": 0 00:06:34.819 }, 00:06:34.819 "claimed": false, 00:06:34.819 "zoned": false, 00:06:34.819 "supported_io_types": { 00:06:34.819 "read": true, 00:06:34.819 "write": true, 00:06:34.819 "unmap": true, 00:06:34.819 "flush": true, 00:06:34.819 "reset": true, 00:06:34.819 "nvme_admin": false, 00:06:34.819 "nvme_io": false, 00:06:34.819 "nvme_io_md": false, 00:06:34.819 "write_zeroes": true, 00:06:34.819 "zcopy": true, 00:06:34.819 "get_zone_info": false, 00:06:34.819 "zone_management": false, 00:06:34.819 "zone_append": false, 00:06:34.819 "compare": false, 00:06:34.819 "compare_and_write": false, 00:06:34.819 "abort": true, 00:06:34.819 "seek_hole": false, 00:06:34.819 "seek_data": false, 00:06:34.819 "copy": true, 00:06:34.819 "nvme_iov_md": false 00:06:34.819 }, 00:06:34.819 "memory_domains": [ 00:06:34.819 { 00:06:34.819 "dma_device_id": "system", 00:06:34.819 "dma_device_type": 1 00:06:34.819 }, 00:06:34.819 { 00:06:34.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.819 "dma_device_type": 2 00:06:34.819 } 00:06:34.819 ], 00:06:34.819 "driver_specific": {} 00:06:34.819 } 00:06:34.819 ] 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:34.819 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 true 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.819 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:34.819 Dev_2 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.819 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:34.819 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:35.079 02:31:21 
blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:35.079 [ 00:06:35.079 { 00:06:35.079 "name": "Dev_2", 00:06:35.079 "aliases": [ 00:06:35.079 "fb018686-4a2d-11ef-9c8e-7947904e2597" 00:06:35.079 ], 00:06:35.079 "product_name": "Malloc disk", 00:06:35.079 "block_size": 512, 00:06:35.079 "num_blocks": 262144, 00:06:35.079 "uuid": "fb018686-4a2d-11ef-9c8e-7947904e2597", 00:06:35.079 "assigned_rate_limits": { 00:06:35.079 "rw_ios_per_sec": 0, 00:06:35.079 "rw_mbytes_per_sec": 0, 00:06:35.079 "r_mbytes_per_sec": 0, 00:06:35.079 "w_mbytes_per_sec": 0 00:06:35.079 }, 00:06:35.079 "claimed": false, 00:06:35.079 "zoned": false, 00:06:35.079 "supported_io_types": { 00:06:35.079 "read": true, 00:06:35.079 "write": true, 00:06:35.079 "unmap": true, 00:06:35.079 "flush": true, 00:06:35.079 "reset": true, 00:06:35.079 "nvme_admin": false, 00:06:35.079 "nvme_io": false, 00:06:35.079 "nvme_io_md": false, 00:06:35.079 "write_zeroes": true, 00:06:35.079 "zcopy": true, 00:06:35.079 "get_zone_info": false, 00:06:35.079 "zone_management": false, 00:06:35.079 "zone_append": false, 00:06:35.079 "compare": false, 00:06:35.079 "compare_and_write": false, 00:06:35.079 "abort": true, 00:06:35.079 "seek_hole": false, 00:06:35.079 "seek_data": false, 00:06:35.079 "copy": true, 00:06:35.079 "nvme_iov_md": false 00:06:35.079 }, 00:06:35.079 "memory_domains": [ 00:06:35.079 { 00:06:35.079 "dma_device_id": "system", 00:06:35.079 "dma_device_type": 1 00:06:35.079 }, 00:06:35.079 { 00:06:35.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.079 "dma_device_type": 2 00:06:35.079 } 00:06:35.079 ], 00:06:35.079 "driver_specific": {} 00:06:35.079 } 00:06:35.079 ] 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:35.079 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.079 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 48472 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:06:35.079 02:31:21 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48472 00:06:35.079 
02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.079 02:31:21 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48472 00:06:35.079 Running I/O for 5 seconds... 00:06:35.079 task offset: 11856 on job bdev=EE_Dev_1 fails 00:06:35.079 00:06:35.079 Latency(us) 00:06:35.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.079 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:35.079 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:06:35.079 EE_Dev_1 : 0.00 217821.78 850.87 49504.95 0.00 49.05 19.75 92.38 00:06:35.079 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:35.080 Dev_2 : 0.00 262295.08 1024.59 0.00 0.00 28.06 19.97 42.84 00:06:35.080 =================================================================================================================== 00:06:35.080 Total : 480116.86 1875.46 49504.95 0.00 37.67 19.75 92.38 00:06:35.080 [2024-07-25 02:31:21.839430] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.080 request: 00:06:35.080 { 00:06:35.080 "method": "perform_tests", 00:06:35.080 "req_id": 1 00:06:35.080 } 00:06:35.080 Got JSON-RPC error response 00:06:35.080 response: 00:06:35.080 { 00:06:35.080 "code": -32603, 00:06:35.080 "message": "bdevperf failed with error Operation not permitted" 00:06:35.080 } 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.339 00:06:35.339 real 0m8.713s 00:06:35.339 user 0m8.761s 00:06:35.339 sys 0m1.017s 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.339 02:31:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:35.339 ************************************ 00:06:35.339 END TEST bdev_error 00:06:35.339 ************************************ 00:06:35.339 02:31:22 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:35.339 02:31:22 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:06:35.339 02:31:22 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:35.339 02:31:22 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.339 02:31:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:35.339 ************************************ 00:06:35.339 START TEST bdev_stat 00:06:35.339 ************************************ 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- 
# STAT_DEV=Malloc_STAT 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=48503 00:06:35.339 Process Bdev IO statistics testing pid: 48503 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 48503' 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 48503 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48503 ']' 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.339 02:31:22 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:35.339 [2024-07-25 02:31:22.097642] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:06:35.339 [2024-07-25 02:31:22.097802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:35.905 EAL: TSC is not safe to use in SMP mode 00:06:35.905 EAL: TSC is not invariant 00:06:35.905 [2024-07-25 02:31:22.545747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.905 [2024-07-25 02:31:22.636698] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:35.905 [2024-07-25 02:31:22.636733] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:35.905 [2024-07-25 02:31:22.638883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.905 [2024-07-25 02:31:22.638882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:36.163 Malloc_STAT 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.163 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:36.163 [ 00:06:36.163 { 00:06:36.163 "name": "Malloc_STAT", 00:06:36.163 "aliases": [ 00:06:36.163 "fbcc33cb-4a2d-11ef-9c8e-7947904e2597" 00:06:36.163 ], 00:06:36.163 "product_name": "Malloc disk", 00:06:36.163 "block_size": 512, 00:06:36.163 "num_blocks": 262144, 00:06:36.163 "uuid": "fbcc33cb-4a2d-11ef-9c8e-7947904e2597", 00:06:36.163 "assigned_rate_limits": { 00:06:36.163 "rw_ios_per_sec": 0, 00:06:36.163 "rw_mbytes_per_sec": 0, 00:06:36.422 "r_mbytes_per_sec": 0, 00:06:36.422 "w_mbytes_per_sec": 0 00:06:36.422 }, 00:06:36.422 "claimed": false, 00:06:36.422 "zoned": false, 00:06:36.422 "supported_io_types": { 00:06:36.422 "read": true, 00:06:36.422 "write": true, 00:06:36.422 "unmap": true, 00:06:36.422 "flush": true, 00:06:36.422 "reset": true, 00:06:36.422 "nvme_admin": false, 00:06:36.422 "nvme_io": false, 00:06:36.422 "nvme_io_md": false, 00:06:36.422 "write_zeroes": true, 00:06:36.422 "zcopy": true, 00:06:36.422 "get_zone_info": false, 00:06:36.422 "zone_management": false, 00:06:36.422 "zone_append": false, 00:06:36.422 "compare": false, 00:06:36.422 "compare_and_write": false, 00:06:36.422 "abort": true, 00:06:36.422 "seek_hole": false, 00:06:36.422 "seek_data": false, 00:06:36.422 "copy": true, 00:06:36.422 "nvme_iov_md": false 00:06:36.422 }, 00:06:36.422 "memory_domains": [ 00:06:36.422 { 
00:06:36.422 "dma_device_id": "system", 00:06:36.422 "dma_device_type": 1 00:06:36.422 }, 00:06:36.422 { 00:06:36.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.422 "dma_device_type": 2 00:06:36.422 } 00:06:36.422 ], 00:06:36.422 "driver_specific": {} 00:06:36.422 } 00:06:36.422 ] 00:06:36.422 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.422 02:31:23 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:06:36.422 02:31:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:06:36.422 02:31:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:36.422 Running I/O for 10 seconds... 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:38.321 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:06:38.322 "tick_rate": 2294609042, 00:06:38.322 "ticks": 742266908936, 00:06:38.322 "bdevs": [ 00:06:38.322 { 00:06:38.322 "name": "Malloc_STAT", 00:06:38.322 "bytes_read": 13929320960, 00:06:38.322 "num_read_ops": 3400707, 00:06:38.322 "bytes_written": 0, 00:06:38.322 "num_write_ops": 0, 00:06:38.322 "bytes_unmapped": 0, 00:06:38.322 "num_unmap_ops": 0, 00:06:38.322 "bytes_copied": 0, 00:06:38.322 "num_copy_ops": 0, 00:06:38.322 "read_latency_ticks": 2307082518942, 00:06:38.322 "max_read_latency_ticks": 958784, 00:06:38.322 "min_read_latency_ticks": 31710, 00:06:38.322 "write_latency_ticks": 0, 00:06:38.322 "max_write_latency_ticks": 0, 00:06:38.322 "min_write_latency_ticks": 0, 00:06:38.322 "unmap_latency_ticks": 0, 00:06:38.322 "max_unmap_latency_ticks": 0, 00:06:38.322 "min_unmap_latency_ticks": 0, 00:06:38.322 "copy_latency_ticks": 0, 00:06:38.322 "max_copy_latency_ticks": 0, 00:06:38.322 "min_copy_latency_ticks": 0, 00:06:38.322 "io_error": {} 00:06:38.322 } 00:06:38.322 ] 00:06:38.322 }' 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=3400707 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:06:38.322 02:31:25 
blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:06:38.322 "tick_rate": 2294609042, 00:06:38.322 "ticks": 742343028154, 00:06:38.322 "name": "Malloc_STAT", 00:06:38.322 "channels": [ 00:06:38.322 { 00:06:38.322 "thread_id": 2, 00:06:38.322 "bytes_read": 7041187840, 00:06:38.322 "num_read_ops": 1719040, 00:06:38.322 "bytes_written": 0, 00:06:38.322 "num_write_ops": 0, 00:06:38.322 "bytes_unmapped": 0, 00:06:38.322 "num_unmap_ops": 0, 00:06:38.322 "bytes_copied": 0, 00:06:38.322 "num_copy_ops": 0, 00:06:38.322 "read_latency_ticks": 1172928951582, 00:06:38.322 "max_read_latency_ticks": 877094, 00:06:38.322 "min_read_latency_ticks": 630810, 00:06:38.322 "write_latency_ticks": 0, 00:06:38.322 "max_write_latency_ticks": 0, 00:06:38.322 "min_write_latency_ticks": 0, 00:06:38.322 "unmap_latency_ticks": 0, 00:06:38.322 "max_unmap_latency_ticks": 0, 00:06:38.322 "min_unmap_latency_ticks": 0, 00:06:38.322 "copy_latency_ticks": 0, 00:06:38.322 "max_copy_latency_ticks": 0, 00:06:38.322 "min_copy_latency_ticks": 0 00:06:38.322 }, 00:06:38.322 { 00:06:38.322 "thread_id": 3, 00:06:38.322 "bytes_read": 7115636736, 00:06:38.322 "num_read_ops": 1737216, 00:06:38.322 "bytes_written": 0, 00:06:38.322 "num_write_ops": 0, 00:06:38.322 "bytes_unmapped": 0, 00:06:38.322 "num_unmap_ops": 0, 00:06:38.322 "bytes_copied": 0, 00:06:38.322 "num_copy_ops": 0, 00:06:38.322 "read_latency_ticks": 1173015184188, 00:06:38.322 "max_read_latency_ticks": 958784, 00:06:38.322 "min_read_latency_ticks": 629464, 00:06:38.322 "write_latency_ticks": 0, 00:06:38.322 "max_write_latency_ticks": 0, 00:06:38.322 "min_write_latency_ticks": 0, 00:06:38.322 "unmap_latency_ticks": 0, 00:06:38.322 "max_unmap_latency_ticks": 0, 00:06:38.322 "min_unmap_latency_ticks": 0, 00:06:38.322 "copy_latency_ticks": 0, 00:06:38.322 "max_copy_latency_ticks": 0, 00:06:38.322 "min_copy_latency_ticks": 0 00:06:38.322 } 00:06:38.322 ] 00:06:38.322 }' 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=1719040 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=1719040 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=1737216 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=3456256 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:38.322 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.580 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:06:38.580 "tick_rate": 2294609042, 00:06:38.580 "ticks": 742457178746, 00:06:38.580 "bdevs": [ 00:06:38.580 { 00:06:38.580 "name": "Malloc_STAT", 
00:06:38.580 "bytes_read": 14498697728, 00:06:38.580 "num_read_ops": 3539715, 00:06:38.580 "bytes_written": 0, 00:06:38.580 "num_write_ops": 0, 00:06:38.580 "bytes_unmapped": 0, 00:06:38.580 "num_unmap_ops": 0, 00:06:38.580 "bytes_copied": 0, 00:06:38.580 "num_copy_ops": 0, 00:06:38.580 "read_latency_ticks": 2404355949310, 00:06:38.580 "max_read_latency_ticks": 958784, 00:06:38.580 "min_read_latency_ticks": 31710, 00:06:38.580 "write_latency_ticks": 0, 00:06:38.580 "max_write_latency_ticks": 0, 00:06:38.580 "min_write_latency_ticks": 0, 00:06:38.580 "unmap_latency_ticks": 0, 00:06:38.580 "max_unmap_latency_ticks": 0, 00:06:38.580 "min_unmap_latency_ticks": 0, 00:06:38.580 "copy_latency_ticks": 0, 00:06:38.580 "max_copy_latency_ticks": 0, 00:06:38.580 "min_copy_latency_ticks": 0, 00:06:38.580 "io_error": {} 00:06:38.580 } 00:06:38.580 ] 00:06:38.580 }' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=3539715 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 3456256 -lt 3400707 ']' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 3456256 -gt 3539715 ']' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:38.581 00:06:38.581 Latency(us) 00:06:38.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.581 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:38.581 Malloc_STAT : 2.08 859208.57 3356.28 0.00 0.00 297.73 46.63 389.14 00:06:38.581 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:38.581 Malloc_STAT : 2.08 868251.85 3391.61 0.00 0.00 294.63 43.96 419.49 00:06:38.581 =================================================================================================================== 00:06:38.581 Total : 1727460.42 6747.89 0.00 0.00 296.17 43.96 419.49 00:06:38.581 0 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 48503 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48503 ']' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48503 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48503 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:38.581 killing process with pid 48503 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48503' 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- 
common/autotest_common.sh@967 -- # kill 48503 00:06:38.581 Received shutdown signal, test time was about 2.118787 seconds 00:06:38.581 00:06:38.581 Latency(us) 00:06:38.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.581 =================================================================================================================== 00:06:38.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48503 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:06:38.581 00:06:38.581 real 0m3.336s 00:06:38.581 user 0m5.998s 00:06:38.581 sys 0m0.626s 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.581 02:31:25 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:38.581 ************************************ 00:06:38.581 END TEST bdev_stat 00:06:38.581 ************************************ 00:06:38.839 02:31:25 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:06:38.839 02:31:25 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:06:38.839 00:06:38.839 real 1m29.896s 00:06:38.839 user 4m27.060s 00:06:38.839 sys 0m23.497s 00:06:38.839 02:31:25 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.839 02:31:25 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 ************************************ 00:06:38.839 END TEST blockdev_general 00:06:38.839 ************************************ 00:06:38.839 02:31:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.839 02:31:25 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:38.839 02:31:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.839 02:31:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.839 02:31:25 -- common/autotest_common.sh@10 -- # set +x 00:06:38.839 ************************************ 00:06:38.839 START TEST bdev_raid 00:06:38.839 ************************************ 00:06:38.839 02:31:25 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:38.839 * Looking for test storage... 
00:06:38.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:38.839 02:31:25 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.839 02:31:25 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.839 02:31:25 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:38.839 02:31:25 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:06:38.839 02:31:25 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:06:38.839 02:31:25 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:06:38.839 02:31:25 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:06:39.098 02:31:25 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:06:39.098 02:31:25 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:06:39.098 02:31:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.098 02:31:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.098 02:31:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.098 ************************************ 00:06:39.098 START TEST raid0_resize_test 00:06:39.098 ************************************ 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48604 00:06:39.098 Process raid pid: 48604 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48604' 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48604 /var/tmp/spdk-raid.sock 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48604 ']' 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.098 02:31:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.098 [2024-07-25 02:31:25.749355] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:06:39.098 [2024-07-25 02:31:25.749705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:39.356 EAL: TSC is not safe to use in SMP mode 00:06:39.356 EAL: TSC is not invariant 00:06:39.356 [2024-07-25 02:31:26.171375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.614 [2024-07-25 02:31:26.262590] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:39.614 [2024-07-25 02:31:26.264303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.614 [2024-07-25 02:31:26.264974] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.614 [2024-07-25 02:31:26.264987] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.872 02:31:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.872 02:31:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:06:39.872 02:31:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:40.130 Base_1 00:06:40.130 02:31:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:40.427 Base_2 00:06:40.427 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:40.427 [2024-07-25 02:31:27.215837] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:40.427 [2024-07-25 02:31:27.216238] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:40.427 [2024-07-25 02:31:27.216259] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x397ecf034a00 00:06:40.427 [2024-07-25 02:31:27.216262] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:40.427 [2024-07-25 02:31:27.216305] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x397ecf097e20 00:06:40.427 [2024-07-25 02:31:27.216346] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x397ecf034a00 00:06:40.427 [2024-07-25 02:31:27.216350] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x397ecf034a00 00:06:40.427 [2024-07-25 02:31:27.216375] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.427 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:40.686 [2024-07-25 02:31:27.407831] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.686 [2024-07-25 02:31:27.407848] bdev_raid.c:2302:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:40.686 true 00:06:40.686 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:40.686 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:06:40.944 [2024-07-25 02:31:27.599857] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.944 
02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:06:40.944 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:06:40.944 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:06:40.944 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:40.944 [2024-07-25 02:31:27.795831] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:40.944 [2024-07-25 02:31:27.795850] bdev_raid.c:2302:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:40.944 [2024-07-25 02:31:27.795871] bdev_raid.c:2316:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:40.944 true 00:06:40.944 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:40.944 02:31:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:06:41.203 [2024-07-25 02:31:27.987842] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48604 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48604 ']' 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48604 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48604 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:06:41.203 killing process with pid 48604 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48604' 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48604 00:06:41.203 [2024-07-25 02:31:28.019583] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:41.203 [2024-07-25 02:31:28.019598] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.203 [2024-07-25 02:31:28.019616] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.203 [2024-07-25 02:31:28.019619] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x397ecf034a00 name Raid, state offline 00:06:41.203 [2024-07-25 02:31:28.019725] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.203 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48604 00:06:41.462 
02:31:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:06:41.462 00:06:41.462 real 0m2.453s 00:06:41.462 user 0m3.531s 00:06:41.462 sys 0m0.644s 00:06:41.462 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.462 02:31:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.462 ************************************ 00:06:41.462 END TEST raid0_resize_test 00:06:41.462 ************************************ 00:06:41.462 02:31:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:06:41.463 02:31:28 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:06:41.463 02:31:28 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:06:41.463 02:31:28 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:41.463 02:31:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:41.463 02:31:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.463 02:31:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.463 ************************************ 00:06:41.463 START TEST raid_state_function_test 00:06:41.463 ************************************ 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48650 00:06:41.463 Process raid pid: 48650 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48650' 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48650 /var/tmp/spdk-raid.sock 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48650 ']' 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.463 02:31:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.463 [2024-07-25 02:31:28.262225] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:06:41.463 [2024-07-25 02:31:28.262482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:42.032 EAL: TSC is not safe to use in SMP mode 00:06:42.032 EAL: TSC is not invariant 00:06:42.032 [2024-07-25 02:31:28.685539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.032 [2024-07-25 02:31:28.778560] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
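Each verify_raid_bdev_state call in the trace below follows the same pattern: dump the raid bdevs over RPC and pick out the named volume with jq. A rough sketch of that pattern, with the RPC call and jq filter taken from the trace and the field extraction added only for illustration:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r '.state' <<< "$info")                            # "configuring" until both base bdevs exist, then "online"
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")   # 0, 1, then 2 as BaseBdev1/BaseBdev2 get claimed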
00:06:42.032 [2024-07-25 02:31:28.780215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.032 [2024-07-25 02:31:28.780861] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.032 [2024-07-25 02:31:28.780872] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.292 02:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.292 02:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:06:42.292 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:42.552 [2024-07-25 02:31:29.295770] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:42.552 [2024-07-25 02:31:29.295826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:42.552 [2024-07-25 02:31:29.295830] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:42.552 [2024-07-25 02:31:29.295836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:42.552 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:42.840 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:42.840 "name": "Existed_Raid", 00:06:42.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.841 "strip_size_kb": 64, 00:06:42.841 "state": "configuring", 00:06:42.841 "raid_level": "raid0", 00:06:42.841 "superblock": false, 00:06:42.841 "num_base_bdevs": 2, 00:06:42.841 "num_base_bdevs_discovered": 0, 00:06:42.841 "num_base_bdevs_operational": 2, 00:06:42.841 "base_bdevs_list": [ 00:06:42.841 { 00:06:42.841 "name": "BaseBdev1", 00:06:42.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.841 "is_configured": false, 00:06:42.841 "data_offset": 0, 00:06:42.841 "data_size": 0 00:06:42.841 }, 00:06:42.841 { 00:06:42.841 "name": "BaseBdev2", 
00:06:42.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.841 "is_configured": false, 00:06:42.841 "data_offset": 0, 00:06:42.841 "data_size": 0 00:06:42.841 } 00:06:42.841 ] 00:06:42.841 }' 00:06:42.841 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:42.841 02:31:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.100 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:43.100 [2024-07-25 02:31:29.939758] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:43.100 [2024-07-25 02:31:29.939775] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x316e83e34500 name Existed_Raid, state configuring 00:06:43.100 02:31:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:43.359 [2024-07-25 02:31:30.127766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:43.359 [2024-07-25 02:31:30.127791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:43.359 [2024-07-25 02:31:30.127794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:43.359 [2024-07-25 02:31:30.127800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:43.359 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:43.618 [2024-07-25 02:31:30.316617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:43.618 BaseBdev1 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:43.618 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:43.877 [ 00:06:43.877 { 00:06:43.877 "name": "BaseBdev1", 00:06:43.877 "aliases": [ 00:06:43.877 "00251c79-4a2e-11ef-9c8e-7947904e2597" 00:06:43.877 ], 00:06:43.877 "product_name": "Malloc disk", 00:06:43.877 "block_size": 512, 00:06:43.877 "num_blocks": 65536, 00:06:43.877 "uuid": "00251c79-4a2e-11ef-9c8e-7947904e2597", 00:06:43.877 "assigned_rate_limits": { 00:06:43.877 "rw_ios_per_sec": 0, 00:06:43.877 "rw_mbytes_per_sec": 0, 00:06:43.877 "r_mbytes_per_sec": 0, 00:06:43.877 "w_mbytes_per_sec": 0 00:06:43.877 }, 
00:06:43.877 "claimed": true, 00:06:43.877 "claim_type": "exclusive_write", 00:06:43.877 "zoned": false, 00:06:43.877 "supported_io_types": { 00:06:43.877 "read": true, 00:06:43.877 "write": true, 00:06:43.877 "unmap": true, 00:06:43.877 "flush": true, 00:06:43.877 "reset": true, 00:06:43.877 "nvme_admin": false, 00:06:43.877 "nvme_io": false, 00:06:43.877 "nvme_io_md": false, 00:06:43.877 "write_zeroes": true, 00:06:43.877 "zcopy": true, 00:06:43.877 "get_zone_info": false, 00:06:43.877 "zone_management": false, 00:06:43.877 "zone_append": false, 00:06:43.877 "compare": false, 00:06:43.877 "compare_and_write": false, 00:06:43.877 "abort": true, 00:06:43.877 "seek_hole": false, 00:06:43.877 "seek_data": false, 00:06:43.877 "copy": true, 00:06:43.877 "nvme_iov_md": false 00:06:43.877 }, 00:06:43.877 "memory_domains": [ 00:06:43.877 { 00:06:43.877 "dma_device_id": "system", 00:06:43.877 "dma_device_type": 1 00:06:43.877 }, 00:06:43.877 { 00:06:43.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.877 "dma_device_type": 2 00:06:43.877 } 00:06:43.877 ], 00:06:43.877 "driver_specific": {} 00:06:43.877 } 00:06:43.877 ] 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:43.877 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.136 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:44.136 "name": "Existed_Raid", 00:06:44.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.136 "strip_size_kb": 64, 00:06:44.136 "state": "configuring", 00:06:44.136 "raid_level": "raid0", 00:06:44.136 "superblock": false, 00:06:44.136 "num_base_bdevs": 2, 00:06:44.136 "num_base_bdevs_discovered": 1, 00:06:44.136 "num_base_bdevs_operational": 2, 00:06:44.136 "base_bdevs_list": [ 00:06:44.136 { 00:06:44.136 "name": "BaseBdev1", 00:06:44.136 "uuid": "00251c79-4a2e-11ef-9c8e-7947904e2597", 00:06:44.136 "is_configured": true, 00:06:44.136 "data_offset": 0, 00:06:44.136 "data_size": 65536 00:06:44.136 }, 00:06:44.136 { 00:06:44.136 "name": "BaseBdev2", 00:06:44.136 "uuid": "00000000-0000-0000-0000-000000000000", 
00:06:44.136 "is_configured": false, 00:06:44.136 "data_offset": 0, 00:06:44.136 "data_size": 0 00:06:44.136 } 00:06:44.136 ] 00:06:44.136 }' 00:06:44.136 02:31:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:44.136 02:31:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.395 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:44.653 [2024-07-25 02:31:31.335817] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:44.653 [2024-07-25 02:31:31.335837] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x316e83e34500 name Existed_Raid, state configuring 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:44.654 [2024-07-25 02:31:31.519827] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.654 [2024-07-25 02:31:31.520496] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.654 [2024-07-25 02:31:31.520529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:44.654 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:44.913 "name": "Existed_Raid", 00:06:44.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.913 "strip_size_kb": 64, 00:06:44.913 "state": "configuring", 00:06:44.913 "raid_level": "raid0", 00:06:44.913 "superblock": false, 00:06:44.913 "num_base_bdevs": 2, 00:06:44.913 "num_base_bdevs_discovered": 1, 00:06:44.913 
"num_base_bdevs_operational": 2, 00:06:44.913 "base_bdevs_list": [ 00:06:44.913 { 00:06:44.913 "name": "BaseBdev1", 00:06:44.913 "uuid": "00251c79-4a2e-11ef-9c8e-7947904e2597", 00:06:44.913 "is_configured": true, 00:06:44.913 "data_offset": 0, 00:06:44.913 "data_size": 65536 00:06:44.913 }, 00:06:44.913 { 00:06:44.913 "name": "BaseBdev2", 00:06:44.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.913 "is_configured": false, 00:06:44.913 "data_offset": 0, 00:06:44.913 "data_size": 0 00:06:44.913 } 00:06:44.913 ] 00:06:44.913 }' 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:44.913 02:31:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.172 02:31:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:45.432 [2024-07-25 02:31:32.163956] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:45.432 [2024-07-25 02:31:32.163978] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x316e83e34a00 00:06:45.432 [2024-07-25 02:31:32.163981] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:45.432 [2024-07-25 02:31:32.164000] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x316e83e97e20 00:06:45.432 [2024-07-25 02:31:32.164086] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x316e83e34a00 00:06:45.432 [2024-07-25 02:31:32.164089] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x316e83e34a00 00:06:45.432 [2024-07-25 02:31:32.164114] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.432 BaseBdev2 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:45.432 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:45.691 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:45.691 [ 00:06:45.691 { 00:06:45.691 "name": "BaseBdev2", 00:06:45.691 "aliases": [ 00:06:45.691 "013f1a26-4a2e-11ef-9c8e-7947904e2597" 00:06:45.691 ], 00:06:45.691 "product_name": "Malloc disk", 00:06:45.691 "block_size": 512, 00:06:45.691 "num_blocks": 65536, 00:06:45.691 "uuid": "013f1a26-4a2e-11ef-9c8e-7947904e2597", 00:06:45.691 "assigned_rate_limits": { 00:06:45.691 "rw_ios_per_sec": 0, 00:06:45.691 "rw_mbytes_per_sec": 0, 00:06:45.691 "r_mbytes_per_sec": 0, 00:06:45.691 "w_mbytes_per_sec": 0 00:06:45.691 }, 00:06:45.691 "claimed": true, 00:06:45.691 "claim_type": "exclusive_write", 00:06:45.691 "zoned": 
false, 00:06:45.691 "supported_io_types": { 00:06:45.691 "read": true, 00:06:45.691 "write": true, 00:06:45.691 "unmap": true, 00:06:45.691 "flush": true, 00:06:45.691 "reset": true, 00:06:45.691 "nvme_admin": false, 00:06:45.691 "nvme_io": false, 00:06:45.691 "nvme_io_md": false, 00:06:45.691 "write_zeroes": true, 00:06:45.691 "zcopy": true, 00:06:45.691 "get_zone_info": false, 00:06:45.691 "zone_management": false, 00:06:45.691 "zone_append": false, 00:06:45.691 "compare": false, 00:06:45.691 "compare_and_write": false, 00:06:45.691 "abort": true, 00:06:45.691 "seek_hole": false, 00:06:45.691 "seek_data": false, 00:06:45.691 "copy": true, 00:06:45.692 "nvme_iov_md": false 00:06:45.692 }, 00:06:45.692 "memory_domains": [ 00:06:45.692 { 00:06:45.692 "dma_device_id": "system", 00:06:45.692 "dma_device_type": 1 00:06:45.692 }, 00:06:45.692 { 00:06:45.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:45.692 "dma_device_type": 2 00:06:45.692 } 00:06:45.692 ], 00:06:45.692 "driver_specific": {} 00:06:45.692 } 00:06:45.692 ] 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:45.692 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:45.951 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:45.951 "name": "Existed_Raid", 00:06:45.951 "uuid": "013f1fb6-4a2e-11ef-9c8e-7947904e2597", 00:06:45.951 "strip_size_kb": 64, 00:06:45.951 "state": "online", 00:06:45.951 "raid_level": "raid0", 00:06:45.951 "superblock": false, 00:06:45.951 "num_base_bdevs": 2, 00:06:45.951 "num_base_bdevs_discovered": 2, 00:06:45.951 "num_base_bdevs_operational": 2, 00:06:45.951 "base_bdevs_list": [ 00:06:45.951 { 00:06:45.951 "name": "BaseBdev1", 00:06:45.951 "uuid": "00251c79-4a2e-11ef-9c8e-7947904e2597", 00:06:45.951 "is_configured": true, 00:06:45.951 "data_offset": 0, 00:06:45.951 "data_size": 65536 00:06:45.951 }, 00:06:45.951 { 
00:06:45.951 "name": "BaseBdev2", 00:06:45.951 "uuid": "013f1a26-4a2e-11ef-9c8e-7947904e2597", 00:06:45.951 "is_configured": true, 00:06:45.951 "data_offset": 0, 00:06:45.951 "data_size": 65536 00:06:45.951 } 00:06:45.951 ] 00:06:45.951 }' 00:06:45.951 02:31:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:45.951 02:31:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:46.211 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:46.471 [2024-07-25 02:31:33.199866] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.471 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:46.471 "name": "Existed_Raid", 00:06:46.471 "aliases": [ 00:06:46.471 "013f1fb6-4a2e-11ef-9c8e-7947904e2597" 00:06:46.471 ], 00:06:46.471 "product_name": "Raid Volume", 00:06:46.471 "block_size": 512, 00:06:46.471 "num_blocks": 131072, 00:06:46.471 "uuid": "013f1fb6-4a2e-11ef-9c8e-7947904e2597", 00:06:46.471 "assigned_rate_limits": { 00:06:46.471 "rw_ios_per_sec": 0, 00:06:46.471 "rw_mbytes_per_sec": 0, 00:06:46.471 "r_mbytes_per_sec": 0, 00:06:46.471 "w_mbytes_per_sec": 0 00:06:46.471 }, 00:06:46.471 "claimed": false, 00:06:46.471 "zoned": false, 00:06:46.471 "supported_io_types": { 00:06:46.471 "read": true, 00:06:46.471 "write": true, 00:06:46.471 "unmap": true, 00:06:46.471 "flush": true, 00:06:46.471 "reset": true, 00:06:46.471 "nvme_admin": false, 00:06:46.471 "nvme_io": false, 00:06:46.471 "nvme_io_md": false, 00:06:46.471 "write_zeroes": true, 00:06:46.471 "zcopy": false, 00:06:46.471 "get_zone_info": false, 00:06:46.471 "zone_management": false, 00:06:46.471 "zone_append": false, 00:06:46.471 "compare": false, 00:06:46.471 "compare_and_write": false, 00:06:46.471 "abort": false, 00:06:46.471 "seek_hole": false, 00:06:46.471 "seek_data": false, 00:06:46.471 "copy": false, 00:06:46.471 "nvme_iov_md": false 00:06:46.471 }, 00:06:46.471 "memory_domains": [ 00:06:46.471 { 00:06:46.471 "dma_device_id": "system", 00:06:46.471 "dma_device_type": 1 00:06:46.471 }, 00:06:46.471 { 00:06:46.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.471 "dma_device_type": 2 00:06:46.471 }, 00:06:46.471 { 00:06:46.471 "dma_device_id": "system", 00:06:46.471 "dma_device_type": 1 00:06:46.471 }, 00:06:46.471 { 00:06:46.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.471 "dma_device_type": 2 00:06:46.471 } 00:06:46.471 ], 00:06:46.471 "driver_specific": { 00:06:46.471 "raid": { 00:06:46.471 "uuid": "013f1fb6-4a2e-11ef-9c8e-7947904e2597", 00:06:46.471 "strip_size_kb": 64, 00:06:46.471 "state": 
"online", 00:06:46.471 "raid_level": "raid0", 00:06:46.471 "superblock": false, 00:06:46.471 "num_base_bdevs": 2, 00:06:46.471 "num_base_bdevs_discovered": 2, 00:06:46.471 "num_base_bdevs_operational": 2, 00:06:46.471 "base_bdevs_list": [ 00:06:46.471 { 00:06:46.471 "name": "BaseBdev1", 00:06:46.471 "uuid": "00251c79-4a2e-11ef-9c8e-7947904e2597", 00:06:46.471 "is_configured": true, 00:06:46.471 "data_offset": 0, 00:06:46.471 "data_size": 65536 00:06:46.471 }, 00:06:46.471 { 00:06:46.471 "name": "BaseBdev2", 00:06:46.471 "uuid": "013f1a26-4a2e-11ef-9c8e-7947904e2597", 00:06:46.471 "is_configured": true, 00:06:46.471 "data_offset": 0, 00:06:46.471 "data_size": 65536 00:06:46.471 } 00:06:46.471 ] 00:06:46.471 } 00:06:46.471 } 00:06:46.471 }' 00:06:46.471 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:46.471 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:46.471 BaseBdev2' 00:06:46.471 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:46.471 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:46.471 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:46.730 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:46.730 "name": "BaseBdev1", 00:06:46.730 "aliases": [ 00:06:46.730 "00251c79-4a2e-11ef-9c8e-7947904e2597" 00:06:46.730 ], 00:06:46.730 "product_name": "Malloc disk", 00:06:46.730 "block_size": 512, 00:06:46.730 "num_blocks": 65536, 00:06:46.730 "uuid": "00251c79-4a2e-11ef-9c8e-7947904e2597", 00:06:46.730 "assigned_rate_limits": { 00:06:46.730 "rw_ios_per_sec": 0, 00:06:46.730 "rw_mbytes_per_sec": 0, 00:06:46.730 "r_mbytes_per_sec": 0, 00:06:46.730 "w_mbytes_per_sec": 0 00:06:46.730 }, 00:06:46.730 "claimed": true, 00:06:46.730 "claim_type": "exclusive_write", 00:06:46.730 "zoned": false, 00:06:46.730 "supported_io_types": { 00:06:46.730 "read": true, 00:06:46.730 "write": true, 00:06:46.730 "unmap": true, 00:06:46.730 "flush": true, 00:06:46.730 "reset": true, 00:06:46.730 "nvme_admin": false, 00:06:46.730 "nvme_io": false, 00:06:46.730 "nvme_io_md": false, 00:06:46.730 "write_zeroes": true, 00:06:46.730 "zcopy": true, 00:06:46.730 "get_zone_info": false, 00:06:46.730 "zone_management": false, 00:06:46.730 "zone_append": false, 00:06:46.730 "compare": false, 00:06:46.730 "compare_and_write": false, 00:06:46.730 "abort": true, 00:06:46.730 "seek_hole": false, 00:06:46.730 "seek_data": false, 00:06:46.730 "copy": true, 00:06:46.730 "nvme_iov_md": false 00:06:46.730 }, 00:06:46.730 "memory_domains": [ 00:06:46.730 { 00:06:46.731 "dma_device_id": "system", 00:06:46.731 "dma_device_type": 1 00:06:46.731 }, 00:06:46.731 { 00:06:46.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.731 "dma_device_type": 2 00:06:46.731 } 00:06:46.731 ], 00:06:46.731 "driver_specific": {} 00:06:46.731 }' 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:46.731 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:46.989 "name": "BaseBdev2", 00:06:46.989 "aliases": [ 00:06:46.989 "013f1a26-4a2e-11ef-9c8e-7947904e2597" 00:06:46.989 ], 00:06:46.989 "product_name": "Malloc disk", 00:06:46.989 "block_size": 512, 00:06:46.989 "num_blocks": 65536, 00:06:46.989 "uuid": "013f1a26-4a2e-11ef-9c8e-7947904e2597", 00:06:46.989 "assigned_rate_limits": { 00:06:46.989 "rw_ios_per_sec": 0, 00:06:46.989 "rw_mbytes_per_sec": 0, 00:06:46.989 "r_mbytes_per_sec": 0, 00:06:46.989 "w_mbytes_per_sec": 0 00:06:46.989 }, 00:06:46.989 "claimed": true, 00:06:46.989 "claim_type": "exclusive_write", 00:06:46.989 "zoned": false, 00:06:46.989 "supported_io_types": { 00:06:46.989 "read": true, 00:06:46.989 "write": true, 00:06:46.989 "unmap": true, 00:06:46.989 "flush": true, 00:06:46.989 "reset": true, 00:06:46.989 "nvme_admin": false, 00:06:46.989 "nvme_io": false, 00:06:46.989 "nvme_io_md": false, 00:06:46.989 "write_zeroes": true, 00:06:46.989 "zcopy": true, 00:06:46.989 "get_zone_info": false, 00:06:46.989 "zone_management": false, 00:06:46.989 "zone_append": false, 00:06:46.989 "compare": false, 00:06:46.989 "compare_and_write": false, 00:06:46.989 "abort": true, 00:06:46.989 "seek_hole": false, 00:06:46.989 "seek_data": false, 00:06:46.989 "copy": true, 00:06:46.989 "nvme_iov_md": false 00:06:46.989 }, 00:06:46.989 "memory_domains": [ 00:06:46.989 { 00:06:46.989 "dma_device_id": "system", 00:06:46.989 "dma_device_type": 1 00:06:46.989 }, 00:06:46.989 { 00:06:46.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.989 "dma_device_type": 2 00:06:46.989 } 00:06:46.989 ], 00:06:46.989 "driver_specific": {} 00:06:46.989 }' 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:46.989 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:47.249 [2024-07-25 02:31:33.955877] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:47.249 [2024-07-25 02:31:33.955891] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:47.249 [2024-07-25 02:31:33.955902] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:47.249 02:31:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.507 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:47.507 "name": "Existed_Raid", 00:06:47.507 "uuid": "013f1fb6-4a2e-11ef-9c8e-7947904e2597", 00:06:47.507 "strip_size_kb": 64, 00:06:47.507 "state": "offline", 00:06:47.507 "raid_level": "raid0", 00:06:47.507 "superblock": false, 00:06:47.507 
"num_base_bdevs": 2, 00:06:47.507 "num_base_bdevs_discovered": 1, 00:06:47.507 "num_base_bdevs_operational": 1, 00:06:47.507 "base_bdevs_list": [ 00:06:47.507 { 00:06:47.507 "name": null, 00:06:47.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.507 "is_configured": false, 00:06:47.507 "data_offset": 0, 00:06:47.507 "data_size": 65536 00:06:47.507 }, 00:06:47.507 { 00:06:47.507 "name": "BaseBdev2", 00:06:47.507 "uuid": "013f1a26-4a2e-11ef-9c8e-7947904e2597", 00:06:47.507 "is_configured": true, 00:06:47.507 "data_offset": 0, 00:06:47.507 "data_size": 65536 00:06:47.507 } 00:06:47.507 ] 00:06:47.507 }' 00:06:47.508 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:47.508 02:31:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:47.767 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:48.027 [2024-07-25 02:31:34.800516] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:48.027 [2024-07-25 02:31:34.800534] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x316e83e34a00 name Existed_Raid, state offline 00:06:48.027 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:48.027 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:48.027 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:48.027 02:31:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48650 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48650 ']' 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48650 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 48650 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:06:48.286 killing process with pid 48650 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48650' 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48650 00:06:48.286 [2024-07-25 02:31:35.018365] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.286 [2024-07-25 02:31:35.018399] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.286 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48650 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:06:48.546 00:06:48.546 real 0m6.942s 00:06:48.546 user 0m11.800s 00:06:48.546 sys 0m1.427s 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.546 ************************************ 00:06:48.546 END TEST raid_state_function_test 00:06:48.546 ************************************ 00:06:48.546 02:31:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:06:48.546 02:31:35 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:48.546 02:31:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:48.546 02:31:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.546 02:31:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.546 ************************************ 00:06:48.546 START TEST raid_state_function_test_sb 00:06:48.546 ************************************ 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:48.546 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48917 00:06:48.547 Process raid pid: 48917 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48917' 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48917 /var/tmp/spdk-raid.sock 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48917 ']' 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.547 02:31:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.547 [2024-07-25 02:31:35.254351] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:06:48.547 [2024-07-25 02:31:35.254600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:48.806 EAL: TSC is not safe to use in SMP mode 00:06:48.806 EAL: TSC is not invariant 00:06:48.806 [2024-07-25 02:31:35.672360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.066 [2024-07-25 02:31:35.762860] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
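The raid_state_function_test_sb run that follows repeats the same state checks with superblock=true, so superblock_create_arg becomes -s. The only difference visible in the create calls is sketched here (commands as they appear in the two traces; $rpc again stands for the test's rpc_py alias):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_create -z 64    -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # plain test
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # _sb test: -s enables the superblock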
00:06:49.066 [2024-07-25 02:31:35.764513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.066 [2024-07-25 02:31:35.765075] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.066 [2024-07-25 02:31:35.765086] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.326 02:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.326 02:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:06:49.326 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:49.586 [2024-07-25 02:31:36.300024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:49.586 [2024-07-25 02:31:36.300069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:49.586 [2024-07-25 02:31:36.300091] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.586 [2024-07-25 02:31:36.300098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:49.586 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.846 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:49.846 "name": "Existed_Raid", 00:06:49.846 "uuid": "03b63b73-4a2e-11ef-9c8e-7947904e2597", 00:06:49.846 "strip_size_kb": 64, 00:06:49.846 "state": "configuring", 00:06:49.846 "raid_level": "raid0", 00:06:49.846 "superblock": true, 00:06:49.846 "num_base_bdevs": 2, 00:06:49.846 "num_base_bdevs_discovered": 0, 00:06:49.846 "num_base_bdevs_operational": 2, 00:06:49.846 "base_bdevs_list": [ 00:06:49.846 { 00:06:49.846 "name": "BaseBdev1", 00:06:49.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.846 "is_configured": false, 00:06:49.846 "data_offset": 0, 00:06:49.846 "data_size": 0 00:06:49.846 }, 
00:06:49.846 { 00:06:49.846 "name": "BaseBdev2", 00:06:49.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.846 "is_configured": false, 00:06:49.846 "data_offset": 0, 00:06:49.846 "data_size": 0 00:06:49.846 } 00:06:49.846 ] 00:06:49.846 }' 00:06:49.846 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:49.846 02:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.106 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:50.106 [2024-07-25 02:31:36.947999] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:50.106 [2024-07-25 02:31:36.948018] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a2f0034500 name Existed_Raid, state configuring 00:06:50.106 02:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:50.365 [2024-07-25 02:31:37.112005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.365 [2024-07-25 02:31:37.112036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.365 [2024-07-25 02:31:37.112039] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.365 [2024-07-25 02:31:37.112045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.366 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:50.625 [2024-07-25 02:31:37.300802] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:50.625 BaseBdev1 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:50.625 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:50.885 [ 00:06:50.885 { 00:06:50.885 "name": "BaseBdev1", 00:06:50.885 "aliases": [ 00:06:50.885 "044ed1c6-4a2e-11ef-9c8e-7947904e2597" 00:06:50.885 ], 00:06:50.885 "product_name": "Malloc disk", 00:06:50.885 "block_size": 512, 00:06:50.885 "num_blocks": 65536, 00:06:50.885 "uuid": "044ed1c6-4a2e-11ef-9c8e-7947904e2597", 00:06:50.885 "assigned_rate_limits": { 00:06:50.885 "rw_ios_per_sec": 0, 00:06:50.885 "rw_mbytes_per_sec": 
0, 00:06:50.885 "r_mbytes_per_sec": 0, 00:06:50.885 "w_mbytes_per_sec": 0 00:06:50.885 }, 00:06:50.885 "claimed": true, 00:06:50.885 "claim_type": "exclusive_write", 00:06:50.885 "zoned": false, 00:06:50.885 "supported_io_types": { 00:06:50.885 "read": true, 00:06:50.885 "write": true, 00:06:50.885 "unmap": true, 00:06:50.885 "flush": true, 00:06:50.885 "reset": true, 00:06:50.885 "nvme_admin": false, 00:06:50.885 "nvme_io": false, 00:06:50.885 "nvme_io_md": false, 00:06:50.885 "write_zeroes": true, 00:06:50.885 "zcopy": true, 00:06:50.885 "get_zone_info": false, 00:06:50.885 "zone_management": false, 00:06:50.885 "zone_append": false, 00:06:50.885 "compare": false, 00:06:50.885 "compare_and_write": false, 00:06:50.885 "abort": true, 00:06:50.885 "seek_hole": false, 00:06:50.885 "seek_data": false, 00:06:50.885 "copy": true, 00:06:50.885 "nvme_iov_md": false 00:06:50.885 }, 00:06:50.885 "memory_domains": [ 00:06:50.885 { 00:06:50.885 "dma_device_id": "system", 00:06:50.885 "dma_device_type": 1 00:06:50.885 }, 00:06:50.885 { 00:06:50.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.885 "dma_device_type": 2 00:06:50.885 } 00:06:50.885 ], 00:06:50.885 "driver_specific": {} 00:06:50.885 } 00:06:50.885 ] 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:50.885 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.145 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:51.145 "name": "Existed_Raid", 00:06:51.145 "uuid": "043221a6-4a2e-11ef-9c8e-7947904e2597", 00:06:51.145 "strip_size_kb": 64, 00:06:51.145 "state": "configuring", 00:06:51.145 "raid_level": "raid0", 00:06:51.145 "superblock": true, 00:06:51.145 "num_base_bdevs": 2, 00:06:51.145 "num_base_bdevs_discovered": 1, 00:06:51.145 "num_base_bdevs_operational": 2, 00:06:51.145 "base_bdevs_list": [ 00:06:51.145 { 00:06:51.145 "name": "BaseBdev1", 00:06:51.145 "uuid": "044ed1c6-4a2e-11ef-9c8e-7947904e2597", 00:06:51.145 "is_configured": true, 00:06:51.145 "data_offset": 2048, 00:06:51.145 "data_size": 
63488 00:06:51.145 }, 00:06:51.145 { 00:06:51.145 "name": "BaseBdev2", 00:06:51.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.145 "is_configured": false, 00:06:51.145 "data_offset": 0, 00:06:51.145 "data_size": 0 00:06:51.145 } 00:06:51.145 ] 00:06:51.145 }' 00:06:51.145 02:31:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:51.145 02:31:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.405 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:51.665 [2024-07-25 02:31:38.320016] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.665 [2024-07-25 02:31:38.320037] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a2f0034500 name Existed_Raid, state configuring 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:51.665 [2024-07-25 02:31:38.512032] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.665 [2024-07-25 02:31:38.512623] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.665 [2024-07-25 02:31:38.512659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:51.665 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.924 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:51.924 "name": "Existed_Raid", 00:06:51.924 "uuid": "0507c211-4a2e-11ef-9c8e-7947904e2597", 00:06:51.924 "strip_size_kb": 64, 00:06:51.924 
"state": "configuring", 00:06:51.924 "raid_level": "raid0", 00:06:51.924 "superblock": true, 00:06:51.924 "num_base_bdevs": 2, 00:06:51.924 "num_base_bdevs_discovered": 1, 00:06:51.924 "num_base_bdevs_operational": 2, 00:06:51.924 "base_bdevs_list": [ 00:06:51.924 { 00:06:51.924 "name": "BaseBdev1", 00:06:51.924 "uuid": "044ed1c6-4a2e-11ef-9c8e-7947904e2597", 00:06:51.924 "is_configured": true, 00:06:51.924 "data_offset": 2048, 00:06:51.924 "data_size": 63488 00:06:51.924 }, 00:06:51.924 { 00:06:51.924 "name": "BaseBdev2", 00:06:51.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.924 "is_configured": false, 00:06:51.924 "data_offset": 0, 00:06:51.924 "data_size": 0 00:06:51.924 } 00:06:51.924 ] 00:06:51.924 }' 00:06:51.924 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:51.924 02:31:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.183 02:31:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:52.613 [2024-07-25 02:31:39.148134] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:52.613 [2024-07-25 02:31:39.148207] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a2f0034a00 00:06:52.613 [2024-07-25 02:31:39.148211] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:52.613 [2024-07-25 02:31:39.148229] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a2f0097e20 00:06:52.613 [2024-07-25 02:31:39.148260] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a2f0034a00 00:06:52.613 [2024-07-25 02:31:39.148264] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x12a2f0034a00 00:06:52.613 [2024-07-25 02:31:39.148279] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.613 BaseBdev2 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:52.613 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:52.872 [ 00:06:52.872 { 00:06:52.872 "name": "BaseBdev2", 00:06:52.872 "aliases": [ 00:06:52.872 "0568ce4b-4a2e-11ef-9c8e-7947904e2597" 00:06:52.872 ], 00:06:52.872 "product_name": "Malloc disk", 00:06:52.872 "block_size": 512, 00:06:52.872 "num_blocks": 65536, 00:06:52.872 "uuid": "0568ce4b-4a2e-11ef-9c8e-7947904e2597", 00:06:52.872 "assigned_rate_limits": { 00:06:52.872 "rw_ios_per_sec": 0, 
00:06:52.872 "rw_mbytes_per_sec": 0, 00:06:52.872 "r_mbytes_per_sec": 0, 00:06:52.872 "w_mbytes_per_sec": 0 00:06:52.872 }, 00:06:52.872 "claimed": true, 00:06:52.872 "claim_type": "exclusive_write", 00:06:52.872 "zoned": false, 00:06:52.872 "supported_io_types": { 00:06:52.872 "read": true, 00:06:52.872 "write": true, 00:06:52.872 "unmap": true, 00:06:52.872 "flush": true, 00:06:52.872 "reset": true, 00:06:52.872 "nvme_admin": false, 00:06:52.872 "nvme_io": false, 00:06:52.872 "nvme_io_md": false, 00:06:52.872 "write_zeroes": true, 00:06:52.872 "zcopy": true, 00:06:52.872 "get_zone_info": false, 00:06:52.872 "zone_management": false, 00:06:52.872 "zone_append": false, 00:06:52.872 "compare": false, 00:06:52.872 "compare_and_write": false, 00:06:52.872 "abort": true, 00:06:52.872 "seek_hole": false, 00:06:52.872 "seek_data": false, 00:06:52.872 "copy": true, 00:06:52.872 "nvme_iov_md": false 00:06:52.872 }, 00:06:52.872 "memory_domains": [ 00:06:52.872 { 00:06:52.872 "dma_device_id": "system", 00:06:52.872 "dma_device_type": 1 00:06:52.872 }, 00:06:52.872 { 00:06:52.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.872 "dma_device_type": 2 00:06:52.872 } 00:06:52.872 ], 00:06:52.872 "driver_specific": {} 00:06:52.872 } 00:06:52.872 ] 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:52.872 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:52.873 "name": "Existed_Raid", 00:06:52.873 "uuid": "0507c211-4a2e-11ef-9c8e-7947904e2597", 00:06:52.873 "strip_size_kb": 64, 00:06:52.873 "state": "online", 00:06:52.873 "raid_level": "raid0", 00:06:52.873 "superblock": true, 00:06:52.873 "num_base_bdevs": 2, 00:06:52.873 "num_base_bdevs_discovered": 2, 00:06:52.873 "num_base_bdevs_operational": 2, 
00:06:52.873 "base_bdevs_list": [ 00:06:52.873 { 00:06:52.873 "name": "BaseBdev1", 00:06:52.873 "uuid": "044ed1c6-4a2e-11ef-9c8e-7947904e2597", 00:06:52.873 "is_configured": true, 00:06:52.873 "data_offset": 2048, 00:06:52.873 "data_size": 63488 00:06:52.873 }, 00:06:52.873 { 00:06:52.873 "name": "BaseBdev2", 00:06:52.873 "uuid": "0568ce4b-4a2e-11ef-9c8e-7947904e2597", 00:06:52.873 "is_configured": true, 00:06:52.873 "data_offset": 2048, 00:06:52.873 "data_size": 63488 00:06:52.873 } 00:06:52.873 ] 00:06:52.873 }' 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:52.873 02:31:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:53.131 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:53.393 [2024-07-25 02:31:40.180069] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.393 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:53.393 "name": "Existed_Raid", 00:06:53.393 "aliases": [ 00:06:53.393 "0507c211-4a2e-11ef-9c8e-7947904e2597" 00:06:53.393 ], 00:06:53.393 "product_name": "Raid Volume", 00:06:53.393 "block_size": 512, 00:06:53.393 "num_blocks": 126976, 00:06:53.393 "uuid": "0507c211-4a2e-11ef-9c8e-7947904e2597", 00:06:53.393 "assigned_rate_limits": { 00:06:53.393 "rw_ios_per_sec": 0, 00:06:53.393 "rw_mbytes_per_sec": 0, 00:06:53.393 "r_mbytes_per_sec": 0, 00:06:53.393 "w_mbytes_per_sec": 0 00:06:53.393 }, 00:06:53.393 "claimed": false, 00:06:53.393 "zoned": false, 00:06:53.393 "supported_io_types": { 00:06:53.393 "read": true, 00:06:53.393 "write": true, 00:06:53.393 "unmap": true, 00:06:53.393 "flush": true, 00:06:53.393 "reset": true, 00:06:53.393 "nvme_admin": false, 00:06:53.393 "nvme_io": false, 00:06:53.393 "nvme_io_md": false, 00:06:53.393 "write_zeroes": true, 00:06:53.393 "zcopy": false, 00:06:53.393 "get_zone_info": false, 00:06:53.393 "zone_management": false, 00:06:53.393 "zone_append": false, 00:06:53.393 "compare": false, 00:06:53.393 "compare_and_write": false, 00:06:53.393 "abort": false, 00:06:53.393 "seek_hole": false, 00:06:53.393 "seek_data": false, 00:06:53.393 "copy": false, 00:06:53.393 "nvme_iov_md": false 00:06:53.393 }, 00:06:53.393 "memory_domains": [ 00:06:53.393 { 00:06:53.393 "dma_device_id": "system", 00:06:53.393 "dma_device_type": 1 00:06:53.393 }, 00:06:53.393 { 00:06:53.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.393 "dma_device_type": 2 00:06:53.393 }, 00:06:53.393 { 00:06:53.393 "dma_device_id": "system", 00:06:53.393 "dma_device_type": 1 00:06:53.393 
}, 00:06:53.393 { 00:06:53.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.393 "dma_device_type": 2 00:06:53.393 } 00:06:53.393 ], 00:06:53.393 "driver_specific": { 00:06:53.393 "raid": { 00:06:53.393 "uuid": "0507c211-4a2e-11ef-9c8e-7947904e2597", 00:06:53.393 "strip_size_kb": 64, 00:06:53.393 "state": "online", 00:06:53.393 "raid_level": "raid0", 00:06:53.393 "superblock": true, 00:06:53.393 "num_base_bdevs": 2, 00:06:53.393 "num_base_bdevs_discovered": 2, 00:06:53.393 "num_base_bdevs_operational": 2, 00:06:53.393 "base_bdevs_list": [ 00:06:53.393 { 00:06:53.393 "name": "BaseBdev1", 00:06:53.393 "uuid": "044ed1c6-4a2e-11ef-9c8e-7947904e2597", 00:06:53.393 "is_configured": true, 00:06:53.393 "data_offset": 2048, 00:06:53.393 "data_size": 63488 00:06:53.393 }, 00:06:53.393 { 00:06:53.393 "name": "BaseBdev2", 00:06:53.393 "uuid": "0568ce4b-4a2e-11ef-9c8e-7947904e2597", 00:06:53.393 "is_configured": true, 00:06:53.393 "data_offset": 2048, 00:06:53.393 "data_size": 63488 00:06:53.393 } 00:06:53.393 ] 00:06:53.393 } 00:06:53.393 } 00:06:53.393 }' 00:06:53.393 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:53.393 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:53.393 BaseBdev2' 00:06:53.393 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:53.393 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:53.393 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:53.653 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:53.653 "name": "BaseBdev1", 00:06:53.653 "aliases": [ 00:06:53.653 "044ed1c6-4a2e-11ef-9c8e-7947904e2597" 00:06:53.653 ], 00:06:53.653 "product_name": "Malloc disk", 00:06:53.653 "block_size": 512, 00:06:53.653 "num_blocks": 65536, 00:06:53.653 "uuid": "044ed1c6-4a2e-11ef-9c8e-7947904e2597", 00:06:53.653 "assigned_rate_limits": { 00:06:53.653 "rw_ios_per_sec": 0, 00:06:53.653 "rw_mbytes_per_sec": 0, 00:06:53.653 "r_mbytes_per_sec": 0, 00:06:53.653 "w_mbytes_per_sec": 0 00:06:53.653 }, 00:06:53.653 "claimed": true, 00:06:53.653 "claim_type": "exclusive_write", 00:06:53.653 "zoned": false, 00:06:53.653 "supported_io_types": { 00:06:53.653 "read": true, 00:06:53.653 "write": true, 00:06:53.653 "unmap": true, 00:06:53.653 "flush": true, 00:06:53.653 "reset": true, 00:06:53.653 "nvme_admin": false, 00:06:53.653 "nvme_io": false, 00:06:53.653 "nvme_io_md": false, 00:06:53.653 "write_zeroes": true, 00:06:53.653 "zcopy": true, 00:06:53.653 "get_zone_info": false, 00:06:53.653 "zone_management": false, 00:06:53.653 "zone_append": false, 00:06:53.653 "compare": false, 00:06:53.653 "compare_and_write": false, 00:06:53.653 "abort": true, 00:06:53.653 "seek_hole": false, 00:06:53.653 "seek_data": false, 00:06:53.653 "copy": true, 00:06:53.653 "nvme_iov_md": false 00:06:53.653 }, 00:06:53.654 "memory_domains": [ 00:06:53.654 { 00:06:53.654 "dma_device_id": "system", 00:06:53.654 "dma_device_type": 1 00:06:53.654 }, 00:06:53.654 { 00:06:53.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.654 "dma_device_type": 2 00:06:53.654 } 00:06:53.654 ], 00:06:53.654 "driver_specific": {} 00:06:53.654 }' 00:06:53.654 02:31:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:53.654 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:53.913 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:53.913 "name": "BaseBdev2", 00:06:53.913 "aliases": [ 00:06:53.913 "0568ce4b-4a2e-11ef-9c8e-7947904e2597" 00:06:53.913 ], 00:06:53.913 "product_name": "Malloc disk", 00:06:53.913 "block_size": 512, 00:06:53.913 "num_blocks": 65536, 00:06:53.913 "uuid": "0568ce4b-4a2e-11ef-9c8e-7947904e2597", 00:06:53.913 "assigned_rate_limits": { 00:06:53.913 "rw_ios_per_sec": 0, 00:06:53.913 "rw_mbytes_per_sec": 0, 00:06:53.913 "r_mbytes_per_sec": 0, 00:06:53.913 "w_mbytes_per_sec": 0 00:06:53.913 }, 00:06:53.913 "claimed": true, 00:06:53.913 "claim_type": "exclusive_write", 00:06:53.913 "zoned": false, 00:06:53.913 "supported_io_types": { 00:06:53.913 "read": true, 00:06:53.913 "write": true, 00:06:53.913 "unmap": true, 00:06:53.913 "flush": true, 00:06:53.913 "reset": true, 00:06:53.913 "nvme_admin": false, 00:06:53.913 "nvme_io": false, 00:06:53.913 "nvme_io_md": false, 00:06:53.913 "write_zeroes": true, 00:06:53.913 "zcopy": true, 00:06:53.913 "get_zone_info": false, 00:06:53.913 "zone_management": false, 00:06:53.914 "zone_append": false, 00:06:53.914 "compare": false, 00:06:53.914 "compare_and_write": false, 00:06:53.914 "abort": true, 00:06:53.914 "seek_hole": false, 00:06:53.914 "seek_data": false, 00:06:53.914 "copy": true, 00:06:53.914 "nvme_iov_md": false 00:06:53.914 }, 00:06:53.914 "memory_domains": [ 00:06:53.914 { 00:06:53.914 "dma_device_id": "system", 00:06:53.914 "dma_device_type": 1 00:06:53.914 }, 00:06:53.914 { 00:06:53.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.914 "dma_device_type": 2 00:06:53.914 } 00:06:53.914 ], 00:06:53.914 "driver_specific": {} 00:06:53.914 }' 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:53.914 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:54.173 [2024-07-25 02:31:40.948062] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:54.173 [2024-07-25 02:31:40.948077] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.173 [2024-07-25 02:31:40.948086] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:54.173 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
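The offline check that follows is a plain dump-and-filter over the same RPC socket; a hedged stand-alone equivalent (socket path and jq selector taken from the trace, the trailing .state extraction added here for illustration, $SPDK as in the earlier sketch):

  # raid0 carries no redundancy, so deleting one base bdev must take Existed_Raid offline
  $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect: offline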
00:06:54.174 02:31:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.433 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:54.433 "name": "Existed_Raid", 00:06:54.433 "uuid": "0507c211-4a2e-11ef-9c8e-7947904e2597", 00:06:54.433 "strip_size_kb": 64, 00:06:54.433 "state": "offline", 00:06:54.433 "raid_level": "raid0", 00:06:54.433 "superblock": true, 00:06:54.433 "num_base_bdevs": 2, 00:06:54.433 "num_base_bdevs_discovered": 1, 00:06:54.433 "num_base_bdevs_operational": 1, 00:06:54.433 "base_bdevs_list": [ 00:06:54.433 { 00:06:54.433 "name": null, 00:06:54.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.433 "is_configured": false, 00:06:54.433 "data_offset": 2048, 00:06:54.433 "data_size": 63488 00:06:54.433 }, 00:06:54.433 { 00:06:54.433 "name": "BaseBdev2", 00:06:54.433 "uuid": "0568ce4b-4a2e-11ef-9c8e-7947904e2597", 00:06:54.433 "is_configured": true, 00:06:54.433 "data_offset": 2048, 00:06:54.433 "data_size": 63488 00:06:54.433 } 00:06:54.433 ] 00:06:54.433 }' 00:06:54.433 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:54.433 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.692 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:54.692 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:54.692 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:54.692 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:54.951 [2024-07-25 02:31:41.776743] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:54.951 [2024-07-25 02:31:41.776760] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a2f0034a00 name Existed_Raid, state offline 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:54.951 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:55.210 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48917 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 48917 ']' 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48917 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48917 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:06:55.211 killing process with pid 48917 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48917' 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48917 00:06:55.211 [2024-07-25 02:31:41.995610] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.211 [2024-07-25 02:31:41.995645] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.211 02:31:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48917 00:06:55.470 02:31:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:06:55.470 00:06:55.470 real 0m6.926s 00:06:55.470 user 0m11.837s 00:06:55.470 sys 0m1.349s 00:06:55.470 02:31:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.471 02:31:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.471 ************************************ 00:06:55.471 END TEST raid_state_function_test_sb 00:06:55.471 ************************************ 00:06:55.471 02:31:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:06:55.471 02:31:42 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:55.471 02:31:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:55.471 02:31:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.471 02:31:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.471 ************************************ 00:06:55.471 START TEST raid_superblock_test 00:06:55.471 ************************************ 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt_uuid 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49183 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49183 /var/tmp/spdk-raid.sock 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49183 ']' 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.471 02:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.471 [2024-07-25 02:31:42.229390] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:06:55.471 [2024-07-25 02:31:42.229604] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:56.040 EAL: TSC is not safe to use in SMP mode 00:06:56.040 EAL: TSC is not invariant 00:06:56.040 [2024-07-25 02:31:42.644698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.040 [2024-07-25 02:31:42.737091] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
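In raid_superblock_test the base devices are passthru bdevs layered on malloc disks, so each base bdev has a fixed, known UUID. A minimal sketch of one such pair, mirroring the RPCs traced below ($SPDK again stands for the checkout path used by the test):

  # 32 MiB malloc disk with 512-byte blocks (65536 blocks, matching the dump below)
  $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  # passthru bdev pt1 on top of it, pinned to the fixed UUID that later shows up in the raid_bdev1 dump
  $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001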
00:06:56.040 [2024-07-25 02:31:42.738757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.040 [2024-07-25 02:31:42.739411] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.040 [2024-07-25 02:31:42.739421] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:56.299 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:56.558 malloc1 00:06:56.558 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:56.817 [2024-07-25 02:31:43.454409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:56.817 [2024-07-25 02:31:43.454449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.817 [2024-07-25 02:31:43.454456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37575434780 00:06:56.817 [2024-07-25 02:31:43.454461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.817 [2024-07-25 02:31:43.455184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.817 [2024-07-25 02:31:43.455210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:56.817 pt1 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:56.817 02:31:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:56.817 malloc2 00:06:56.817 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:57.076 [2024-07-25 02:31:43.802421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:57.076 [2024-07-25 02:31:43.802459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.076 [2024-07-25 02:31:43.802466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37575434c80 00:06:57.076 [2024-07-25 02:31:43.802472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.076 [2024-07-25 02:31:43.802991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.076 [2024-07-25 02:31:43.803017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:57.076 pt2 00:06:57.076 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:06:57.076 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:06:57.076 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:06:57.335 [2024-07-25 02:31:43.986424] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:57.335 [2024-07-25 02:31:43.986875] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:57.335 [2024-07-25 02:31:43.986925] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x37575434f00 00:06:57.335 [2024-07-25 02:31:43.986931] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.336 [2024-07-25 02:31:43.986973] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x37575497e20 00:06:57.336 [2024-07-25 02:31:43.987026] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x37575434f00 00:06:57.336 [2024-07-25 02:31:43.987029] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x37575434f00 00:06:57.336 [2024-07-25 02:31:43.987048] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.336 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:57.336 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:06:57.336 02:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:57.336 "name": "raid_bdev1", 00:06:57.336 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:06:57.336 "strip_size_kb": 64, 00:06:57.336 "state": "online", 00:06:57.336 "raid_level": "raid0", 00:06:57.336 "superblock": true, 00:06:57.336 "num_base_bdevs": 2, 00:06:57.336 "num_base_bdevs_discovered": 2, 00:06:57.336 "num_base_bdevs_operational": 2, 00:06:57.336 "base_bdevs_list": [ 00:06:57.336 { 00:06:57.336 "name": "pt1", 00:06:57.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:57.336 "is_configured": true, 00:06:57.336 "data_offset": 2048, 00:06:57.336 "data_size": 63488 00:06:57.336 }, 00:06:57.336 { 00:06:57.336 "name": "pt2", 00:06:57.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:57.336 "is_configured": true, 00:06:57.336 "data_offset": 2048, 00:06:57.336 "data_size": 63488 00:06:57.336 } 00:06:57.336 ] 00:06:57.336 }' 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:57.336 02:31:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:57.595 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:57.854 [2024-07-25 02:31:44.642469] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.854 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:57.854 "name": "raid_bdev1", 00:06:57.854 "aliases": [ 00:06:57.854 "084b157e-4a2e-11ef-9c8e-7947904e2597" 00:06:57.854 ], 00:06:57.854 "product_name": "Raid Volume", 00:06:57.854 "block_size": 512, 00:06:57.854 "num_blocks": 126976, 00:06:57.854 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:06:57.854 "assigned_rate_limits": { 00:06:57.854 "rw_ios_per_sec": 0, 00:06:57.854 "rw_mbytes_per_sec": 0, 00:06:57.854 "r_mbytes_per_sec": 0, 00:06:57.854 "w_mbytes_per_sec": 0 00:06:57.854 }, 00:06:57.854 "claimed": false, 00:06:57.854 "zoned": false, 00:06:57.854 "supported_io_types": { 00:06:57.854 "read": true, 00:06:57.854 "write": true, 00:06:57.854 "unmap": true, 00:06:57.854 "flush": true, 00:06:57.854 "reset": true, 00:06:57.855 "nvme_admin": false, 00:06:57.855 "nvme_io": 
false, 00:06:57.855 "nvme_io_md": false, 00:06:57.855 "write_zeroes": true, 00:06:57.855 "zcopy": false, 00:06:57.855 "get_zone_info": false, 00:06:57.855 "zone_management": false, 00:06:57.855 "zone_append": false, 00:06:57.855 "compare": false, 00:06:57.855 "compare_and_write": false, 00:06:57.855 "abort": false, 00:06:57.855 "seek_hole": false, 00:06:57.855 "seek_data": false, 00:06:57.855 "copy": false, 00:06:57.855 "nvme_iov_md": false 00:06:57.855 }, 00:06:57.855 "memory_domains": [ 00:06:57.855 { 00:06:57.855 "dma_device_id": "system", 00:06:57.855 "dma_device_type": 1 00:06:57.855 }, 00:06:57.855 { 00:06:57.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.855 "dma_device_type": 2 00:06:57.855 }, 00:06:57.855 { 00:06:57.855 "dma_device_id": "system", 00:06:57.855 "dma_device_type": 1 00:06:57.855 }, 00:06:57.855 { 00:06:57.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.855 "dma_device_type": 2 00:06:57.855 } 00:06:57.855 ], 00:06:57.855 "driver_specific": { 00:06:57.855 "raid": { 00:06:57.855 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:06:57.855 "strip_size_kb": 64, 00:06:57.855 "state": "online", 00:06:57.855 "raid_level": "raid0", 00:06:57.855 "superblock": true, 00:06:57.855 "num_base_bdevs": 2, 00:06:57.855 "num_base_bdevs_discovered": 2, 00:06:57.855 "num_base_bdevs_operational": 2, 00:06:57.855 "base_bdevs_list": [ 00:06:57.855 { 00:06:57.855 "name": "pt1", 00:06:57.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:57.855 "is_configured": true, 00:06:57.855 "data_offset": 2048, 00:06:57.855 "data_size": 63488 00:06:57.855 }, 00:06:57.855 { 00:06:57.855 "name": "pt2", 00:06:57.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:57.855 "is_configured": true, 00:06:57.855 "data_offset": 2048, 00:06:57.855 "data_size": 63488 00:06:57.855 } 00:06:57.855 ] 00:06:57.855 } 00:06:57.855 } 00:06:57.855 }' 00:06:57.855 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:57.855 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:06:57.855 pt2' 00:06:57.855 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:57.855 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:06:57.855 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:58.114 "name": "pt1", 00:06:58.114 "aliases": [ 00:06:58.114 "00000000-0000-0000-0000-000000000001" 00:06:58.114 ], 00:06:58.114 "product_name": "passthru", 00:06:58.114 "block_size": 512, 00:06:58.114 "num_blocks": 65536, 00:06:58.114 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:58.114 "assigned_rate_limits": { 00:06:58.114 "rw_ios_per_sec": 0, 00:06:58.114 "rw_mbytes_per_sec": 0, 00:06:58.114 "r_mbytes_per_sec": 0, 00:06:58.114 "w_mbytes_per_sec": 0 00:06:58.114 }, 00:06:58.114 "claimed": true, 00:06:58.114 "claim_type": "exclusive_write", 00:06:58.114 "zoned": false, 00:06:58.114 "supported_io_types": { 00:06:58.114 "read": true, 00:06:58.114 "write": true, 00:06:58.114 "unmap": true, 00:06:58.114 "flush": true, 00:06:58.114 "reset": true, 00:06:58.114 "nvme_admin": false, 00:06:58.114 "nvme_io": false, 00:06:58.114 "nvme_io_md": false, 00:06:58.114 "write_zeroes": true, 
00:06:58.114 "zcopy": true, 00:06:58.114 "get_zone_info": false, 00:06:58.114 "zone_management": false, 00:06:58.114 "zone_append": false, 00:06:58.114 "compare": false, 00:06:58.114 "compare_and_write": false, 00:06:58.114 "abort": true, 00:06:58.114 "seek_hole": false, 00:06:58.114 "seek_data": false, 00:06:58.114 "copy": true, 00:06:58.114 "nvme_iov_md": false 00:06:58.114 }, 00:06:58.114 "memory_domains": [ 00:06:58.114 { 00:06:58.114 "dma_device_id": "system", 00:06:58.114 "dma_device_type": 1 00:06:58.114 }, 00:06:58.114 { 00:06:58.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.114 "dma_device_type": 2 00:06:58.114 } 00:06:58.114 ], 00:06:58.114 "driver_specific": { 00:06:58.114 "passthru": { 00:06:58.114 "name": "pt1", 00:06:58.114 "base_bdev_name": "malloc1" 00:06:58.114 } 00:06:58.114 } 00:06:58.114 }' 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:06:58.114 02:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:58.373 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:58.373 "name": "pt2", 00:06:58.373 "aliases": [ 00:06:58.373 "00000000-0000-0000-0000-000000000002" 00:06:58.373 ], 00:06:58.373 "product_name": "passthru", 00:06:58.373 "block_size": 512, 00:06:58.373 "num_blocks": 65536, 00:06:58.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:58.373 "assigned_rate_limits": { 00:06:58.373 "rw_ios_per_sec": 0, 00:06:58.373 "rw_mbytes_per_sec": 0, 00:06:58.373 "r_mbytes_per_sec": 0, 00:06:58.374 "w_mbytes_per_sec": 0 00:06:58.374 }, 00:06:58.374 "claimed": true, 00:06:58.374 "claim_type": "exclusive_write", 00:06:58.374 "zoned": false, 00:06:58.374 "supported_io_types": { 00:06:58.374 "read": true, 00:06:58.374 "write": true, 00:06:58.374 "unmap": true, 00:06:58.374 "flush": true, 00:06:58.374 "reset": true, 00:06:58.374 "nvme_admin": false, 00:06:58.374 "nvme_io": false, 00:06:58.374 "nvme_io_md": false, 00:06:58.374 "write_zeroes": true, 00:06:58.374 "zcopy": true, 00:06:58.374 "get_zone_info": false, 00:06:58.374 "zone_management": false, 00:06:58.374 "zone_append": false, 00:06:58.374 
"compare": false, 00:06:58.374 "compare_and_write": false, 00:06:58.374 "abort": true, 00:06:58.374 "seek_hole": false, 00:06:58.374 "seek_data": false, 00:06:58.374 "copy": true, 00:06:58.374 "nvme_iov_md": false 00:06:58.374 }, 00:06:58.374 "memory_domains": [ 00:06:58.374 { 00:06:58.374 "dma_device_id": "system", 00:06:58.374 "dma_device_type": 1 00:06:58.374 }, 00:06:58.374 { 00:06:58.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.374 "dma_device_type": 2 00:06:58.374 } 00:06:58.374 ], 00:06:58.374 "driver_specific": { 00:06:58.374 "passthru": { 00:06:58.374 "name": "pt2", 00:06:58.374 "base_bdev_name": "malloc2" 00:06:58.374 } 00:06:58.374 } 00:06:58.374 }' 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:58.374 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:06:58.632 [2024-07-25 02:31:45.386478] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.632 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=084b157e-4a2e-11ef-9c8e-7947904e2597 00:06:58.632 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 084b157e-4a2e-11ef-9c8e-7947904e2597 ']' 00:06:58.632 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:06:58.892 [2024-07-25 02:31:45.570458] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:58.892 [2024-07-25 02:31:45.570472] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.892 [2024-07-25 02:31:45.570485] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.892 [2024-07-25 02:31:45.570493] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.892 [2024-07-25 02:31:45.570497] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37575434f00 name raid_bdev1, state offline 00:06:58.892 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:06:58.892 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:06:58.892 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:06:58.892 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:06:58.892 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:06:58.892 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:06:59.151 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:06:59.151 02:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:06:59.411 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:06:59.411 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.670 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:59.671 [2024-07-25 02:31:46.506499] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:59.671 [2024-07-25 02:31:46.506987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:59.671 [2024-07-25 02:31:46.507010] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:06:59.671 [2024-07-25 02:31:46.507034] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:59.671 [2024-07-25 02:31:46.507054] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:59.671 [2024-07-25 02:31:46.507057] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37575434c80 name raid_bdev1, state configuring 00:06:59.671 request: 00:06:59.671 { 00:06:59.671 "name": "raid_bdev1", 00:06:59.671 "raid_level": "raid0", 00:06:59.671 "base_bdevs": [ 00:06:59.671 "malloc1", 00:06:59.671 "malloc2" 00:06:59.671 ], 00:06:59.671 "strip_size_kb": 64, 00:06:59.671 "superblock": false, 00:06:59.671 "method": "bdev_raid_create", 00:06:59.671 "req_id": 1 00:06:59.671 } 00:06:59.671 Got JSON-RPC error response 00:06:59.671 response: 00:06:59.671 { 00:06:59.671 "code": -17, 00:06:59.671 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:59.671 } 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:06:59.671 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:59.930 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:06:59.930 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:06:59.930 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:00.189 [2024-07-25 02:31:46.878503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:00.189 [2024-07-25 02:31:46.878539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.189 [2024-07-25 02:31:46.878546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37575434780 00:07:00.189 [2024-07-25 02:31:46.878552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.189 [2024-07-25 02:31:46.879092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.189 [2024-07-25 02:31:46.879119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:00.189 [2024-07-25 02:31:46.879136] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:00.189 [2024-07-25 02:31:46.879146] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:00.189 pt1 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:00.189 02:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.449 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:00.449 "name": "raid_bdev1", 00:07:00.449 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:07:00.449 "strip_size_kb": 64, 00:07:00.449 "state": "configuring", 00:07:00.449 "raid_level": "raid0", 00:07:00.449 "superblock": true, 00:07:00.449 "num_base_bdevs": 2, 00:07:00.449 "num_base_bdevs_discovered": 1, 00:07:00.449 "num_base_bdevs_operational": 2, 00:07:00.449 "base_bdevs_list": [ 00:07:00.449 { 00:07:00.449 "name": "pt1", 00:07:00.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.449 "is_configured": true, 00:07:00.449 "data_offset": 2048, 00:07:00.449 "data_size": 63488 00:07:00.449 }, 00:07:00.449 { 00:07:00.449 "name": null, 00:07:00.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.449 "is_configured": false, 00:07:00.449 "data_offset": 2048, 00:07:00.449 "data_size": 63488 00:07:00.449 } 00:07:00.449 ] 00:07:00.449 }' 00:07:00.449 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:00.449 02:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.709 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:00.709 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:00.709 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:00.709 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:00.709 [2024-07-25 02:31:47.538510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:00.709 [2024-07-25 02:31:47.538542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.709 [2024-07-25 02:31:47.538566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x37575434f00 00:07:00.709 [2024-07-25 02:31:47.538572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.709 [2024-07-25 02:31:47.538647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.709 [2024-07-25 02:31:47.538653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:00.709 [2024-07-25 02:31:47.538668] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:00.709 
[2024-07-25 02:31:47.538674] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:00.709 [2024-07-25 02:31:47.538692] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x37575435180 00:07:00.709 [2024-07-25 02:31:47.538695] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.710 [2024-07-25 02:31:47.538710] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x37575497e20 00:07:00.710 [2024-07-25 02:31:47.538744] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x37575435180 00:07:00.710 [2024-07-25 02:31:47.538747] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x37575435180 00:07:00.710 [2024-07-25 02:31:47.538762] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.710 pt2 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:00.710 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.969 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:00.969 "name": "raid_bdev1", 00:07:00.969 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:07:00.969 "strip_size_kb": 64, 00:07:00.969 "state": "online", 00:07:00.969 "raid_level": "raid0", 00:07:00.969 "superblock": true, 00:07:00.969 "num_base_bdevs": 2, 00:07:00.969 "num_base_bdevs_discovered": 2, 00:07:00.969 "num_base_bdevs_operational": 2, 00:07:00.969 "base_bdevs_list": [ 00:07:00.969 { 00:07:00.969 "name": "pt1", 00:07:00.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.970 "is_configured": true, 00:07:00.970 "data_offset": 2048, 00:07:00.970 "data_size": 63488 00:07:00.970 }, 00:07:00.970 { 00:07:00.970 "name": "pt2", 00:07:00.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.970 "is_configured": true, 00:07:00.970 "data_offset": 2048, 00:07:00.970 "data_size": 63488 00:07:00.970 } 00:07:00.970 ] 00:07:00.970 }' 00:07:00.970 02:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:07:00.970 02:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:01.242 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:01.594 [2024-07-25 02:31:48.186547] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.594 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:01.594 "name": "raid_bdev1", 00:07:01.594 "aliases": [ 00:07:01.594 "084b157e-4a2e-11ef-9c8e-7947904e2597" 00:07:01.594 ], 00:07:01.594 "product_name": "Raid Volume", 00:07:01.594 "block_size": 512, 00:07:01.594 "num_blocks": 126976, 00:07:01.594 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:07:01.594 "assigned_rate_limits": { 00:07:01.594 "rw_ios_per_sec": 0, 00:07:01.594 "rw_mbytes_per_sec": 0, 00:07:01.594 "r_mbytes_per_sec": 0, 00:07:01.594 "w_mbytes_per_sec": 0 00:07:01.594 }, 00:07:01.594 "claimed": false, 00:07:01.594 "zoned": false, 00:07:01.594 "supported_io_types": { 00:07:01.594 "read": true, 00:07:01.594 "write": true, 00:07:01.594 "unmap": true, 00:07:01.594 "flush": true, 00:07:01.594 "reset": true, 00:07:01.594 "nvme_admin": false, 00:07:01.594 "nvme_io": false, 00:07:01.594 "nvme_io_md": false, 00:07:01.594 "write_zeroes": true, 00:07:01.594 "zcopy": false, 00:07:01.594 "get_zone_info": false, 00:07:01.594 "zone_management": false, 00:07:01.594 "zone_append": false, 00:07:01.594 "compare": false, 00:07:01.594 "compare_and_write": false, 00:07:01.594 "abort": false, 00:07:01.594 "seek_hole": false, 00:07:01.594 "seek_data": false, 00:07:01.594 "copy": false, 00:07:01.594 "nvme_iov_md": false 00:07:01.594 }, 00:07:01.594 "memory_domains": [ 00:07:01.594 { 00:07:01.594 "dma_device_id": "system", 00:07:01.594 "dma_device_type": 1 00:07:01.594 }, 00:07:01.594 { 00:07:01.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.594 "dma_device_type": 2 00:07:01.594 }, 00:07:01.594 { 00:07:01.594 "dma_device_id": "system", 00:07:01.594 "dma_device_type": 1 00:07:01.594 }, 00:07:01.594 { 00:07:01.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.594 "dma_device_type": 2 00:07:01.594 } 00:07:01.594 ], 00:07:01.594 "driver_specific": { 00:07:01.594 "raid": { 00:07:01.594 "uuid": "084b157e-4a2e-11ef-9c8e-7947904e2597", 00:07:01.595 "strip_size_kb": 64, 00:07:01.595 "state": "online", 00:07:01.595 "raid_level": "raid0", 00:07:01.595 "superblock": true, 00:07:01.595 "num_base_bdevs": 2, 00:07:01.595 "num_base_bdevs_discovered": 2, 00:07:01.595 "num_base_bdevs_operational": 2, 00:07:01.595 "base_bdevs_list": [ 00:07:01.595 { 00:07:01.595 "name": "pt1", 00:07:01.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.595 "is_configured": 
true, 00:07:01.595 "data_offset": 2048, 00:07:01.595 "data_size": 63488 00:07:01.595 }, 00:07:01.595 { 00:07:01.595 "name": "pt2", 00:07:01.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.595 "is_configured": true, 00:07:01.595 "data_offset": 2048, 00:07:01.595 "data_size": 63488 00:07:01.595 } 00:07:01.595 ] 00:07:01.595 } 00:07:01.595 } 00:07:01.595 }' 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:01.595 pt2' 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:01.595 "name": "pt1", 00:07:01.595 "aliases": [ 00:07:01.595 "00000000-0000-0000-0000-000000000001" 00:07:01.595 ], 00:07:01.595 "product_name": "passthru", 00:07:01.595 "block_size": 512, 00:07:01.595 "num_blocks": 65536, 00:07:01.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.595 "assigned_rate_limits": { 00:07:01.595 "rw_ios_per_sec": 0, 00:07:01.595 "rw_mbytes_per_sec": 0, 00:07:01.595 "r_mbytes_per_sec": 0, 00:07:01.595 "w_mbytes_per_sec": 0 00:07:01.595 }, 00:07:01.595 "claimed": true, 00:07:01.595 "claim_type": "exclusive_write", 00:07:01.595 "zoned": false, 00:07:01.595 "supported_io_types": { 00:07:01.595 "read": true, 00:07:01.595 "write": true, 00:07:01.595 "unmap": true, 00:07:01.595 "flush": true, 00:07:01.595 "reset": true, 00:07:01.595 "nvme_admin": false, 00:07:01.595 "nvme_io": false, 00:07:01.595 "nvme_io_md": false, 00:07:01.595 "write_zeroes": true, 00:07:01.595 "zcopy": true, 00:07:01.595 "get_zone_info": false, 00:07:01.595 "zone_management": false, 00:07:01.595 "zone_append": false, 00:07:01.595 "compare": false, 00:07:01.595 "compare_and_write": false, 00:07:01.595 "abort": true, 00:07:01.595 "seek_hole": false, 00:07:01.595 "seek_data": false, 00:07:01.595 "copy": true, 00:07:01.595 "nvme_iov_md": false 00:07:01.595 }, 00:07:01.595 "memory_domains": [ 00:07:01.595 { 00:07:01.595 "dma_device_id": "system", 00:07:01.595 "dma_device_type": 1 00:07:01.595 }, 00:07:01.595 { 00:07:01.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.595 "dma_device_type": 2 00:07:01.595 } 00:07:01.595 ], 00:07:01.595 "driver_specific": { 00:07:01.595 "passthru": { 00:07:01.595 "name": "pt1", 00:07:01.595 "base_bdev_name": "malloc1" 00:07:01.595 } 00:07:01.595 } 00:07:01.595 }' 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:07:01.595 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:01.855 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:01.855 "name": "pt2", 00:07:01.855 "aliases": [ 00:07:01.855 "00000000-0000-0000-0000-000000000002" 00:07:01.855 ], 00:07:01.855 "product_name": "passthru", 00:07:01.855 "block_size": 512, 00:07:01.855 "num_blocks": 65536, 00:07:01.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.855 "assigned_rate_limits": { 00:07:01.856 "rw_ios_per_sec": 0, 00:07:01.856 "rw_mbytes_per_sec": 0, 00:07:01.856 "r_mbytes_per_sec": 0, 00:07:01.856 "w_mbytes_per_sec": 0 00:07:01.856 }, 00:07:01.856 "claimed": true, 00:07:01.856 "claim_type": "exclusive_write", 00:07:01.856 "zoned": false, 00:07:01.856 "supported_io_types": { 00:07:01.856 "read": true, 00:07:01.856 "write": true, 00:07:01.856 "unmap": true, 00:07:01.856 "flush": true, 00:07:01.856 "reset": true, 00:07:01.856 "nvme_admin": false, 00:07:01.856 "nvme_io": false, 00:07:01.856 "nvme_io_md": false, 00:07:01.856 "write_zeroes": true, 00:07:01.856 "zcopy": true, 00:07:01.856 "get_zone_info": false, 00:07:01.856 "zone_management": false, 00:07:01.856 "zone_append": false, 00:07:01.856 "compare": false, 00:07:01.856 "compare_and_write": false, 00:07:01.856 "abort": true, 00:07:01.856 "seek_hole": false, 00:07:01.856 "seek_data": false, 00:07:01.856 "copy": true, 00:07:01.856 "nvme_iov_md": false 00:07:01.856 }, 00:07:01.856 "memory_domains": [ 00:07:01.856 { 00:07:01.856 "dma_device_id": "system", 00:07:01.856 "dma_device_type": 1 00:07:01.856 }, 00:07:01.856 { 00:07:01.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.856 "dma_device_type": 2 00:07:01.856 } 00:07:01.856 ], 00:07:01.856 "driver_specific": { 00:07:01.856 "passthru": { 00:07:01.856 "name": "pt2", 00:07:01.856 "base_bdev_name": "malloc2" 00:07:01.856 } 00:07:01.856 } 00:07:01.856 }' 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:01.856 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:02.116 02:31:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:02.116 [2024-07-25 02:31:48.946542] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 084b157e-4a2e-11ef-9c8e-7947904e2597 '!=' 084b157e-4a2e-11ef-9c8e-7947904e2597 ']' 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49183 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49183 ']' 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49183 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49183 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:02.116 killing process with pid 49183 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49183' 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49183 00:07:02.116 [2024-07-25 02:31:48.977424] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.116 [2024-07-25 02:31:48.977439] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.116 [2024-07-25 02:31:48.977457] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.116 [2024-07-25 02:31:48.977461] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37575435180 name raid_bdev1, state offline 00:07:02.116 02:31:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49183 00:07:02.116 [2024-07-25 02:31:48.986775] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.377 02:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:02.377 00:07:02.377 real 0m6.936s 00:07:02.377 user 0m11.930s 00:07:02.377 sys 0m1.288s 00:07:02.377 02:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.377 02:31:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.377 ************************************ 00:07:02.377 END TEST raid_superblock_test 00:07:02.377 ************************************ 00:07:02.377 02:31:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:02.377 02:31:49 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:02.377 02:31:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:02.377 02:31:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.377 02:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.377 ************************************ 00:07:02.377 START TEST raid_read_error_test 00:07:02.377 ************************************ 00:07:02.377 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.eyyi1iyHPA 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49440 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49440 
/var/tmp/spdk-raid.sock 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49440 ']' 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.378 02:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.378 [2024-07-25 02:31:49.227034] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:02.378 [2024-07-25 02:31:49.227359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:02.949 EAL: TSC is not safe to use in SMP mode 00:07:02.949 EAL: TSC is not invariant 00:07:02.949 [2024-07-25 02:31:49.648272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.949 [2024-07-25 02:31:49.740177] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:02.949 [2024-07-25 02:31:49.741892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.949 [2024-07-25 02:31:49.742528] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.949 [2024-07-25 02:31:49.742538] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.518 02:31:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.518 02:31:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:03.518 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:03.518 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:03.518 BaseBdev1_malloc 00:07:03.518 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:03.777 true 00:07:03.777 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:03.777 [2024-07-25 02:31:50.641407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:03.777 [2024-07-25 02:31:50.641456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.777 [2024-07-25 02:31:50.641478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e618434780 00:07:03.777 [2024-07-25 02:31:50.641485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:03.777 [2024-07-25 02:31:50.641947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.777 [2024-07-25 02:31:50.641976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:03.777 BaseBdev1 00:07:04.036 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:04.036 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:04.036 BaseBdev2_malloc 00:07:04.036 02:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:04.295 true 00:07:04.295 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:04.554 [2024-07-25 02:31:51.225411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:04.554 [2024-07-25 02:31:51.225468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.554 [2024-07-25 02:31:51.225490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4e618434c80 00:07:04.554 [2024-07-25 02:31:51.225495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.554 [2024-07-25 02:31:51.225941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.554 [2024-07-25 02:31:51.225966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:04.554 BaseBdev2 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:04.554 [2024-07-25 02:31:51.397417] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.554 [2024-07-25 02:31:51.397797] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.554 [2024-07-25 02:31:51.397870] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x4e618434f00 00:07:04.554 [2024-07-25 02:31:51.397875] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.554 [2024-07-25 02:31:51.397901] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x4e6184a0e20 00:07:04.554 [2024-07-25 02:31:51.397950] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x4e618434f00 00:07:04.554 [2024-07-25 02:31:51.397953] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x4e618434f00 00:07:04.554 [2024-07-25 02:31:51.397971] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:04.554 02:31:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.554 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:04.813 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:04.813 "name": "raid_bdev1", 00:07:04.813 "uuid": "0cb5e973-4a2e-11ef-9c8e-7947904e2597", 00:07:04.813 "strip_size_kb": 64, 00:07:04.813 "state": "online", 00:07:04.813 "raid_level": "raid0", 00:07:04.813 "superblock": true, 00:07:04.813 "num_base_bdevs": 2, 00:07:04.813 "num_base_bdevs_discovered": 2, 00:07:04.813 "num_base_bdevs_operational": 2, 00:07:04.813 "base_bdevs_list": [ 00:07:04.813 { 00:07:04.813 "name": "BaseBdev1", 00:07:04.813 "uuid": "883182f5-524f-5a5d-8499-7b64aecdef69", 00:07:04.813 "is_configured": true, 00:07:04.813 "data_offset": 2048, 00:07:04.813 "data_size": 63488 00:07:04.813 }, 00:07:04.813 { 00:07:04.813 "name": "BaseBdev2", 00:07:04.813 "uuid": "1aeeca3e-92ca-4a5b-94d6-9ed87854ec02", 00:07:04.813 "is_configured": true, 00:07:04.813 "data_offset": 2048, 00:07:04.813 "data_size": 63488 00:07:04.813 } 00:07:04.813 ] 00:07:04.813 }' 00:07:04.813 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:04.813 02:31:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.072 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:05.072 02:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:05.072 [2024-07-25 02:31:51.945480] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x4e6184a0ec0 00:07:06.453 02:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:06.453 02:31:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:06.453 "name": "raid_bdev1", 00:07:06.453 "uuid": "0cb5e973-4a2e-11ef-9c8e-7947904e2597", 00:07:06.453 "strip_size_kb": 64, 00:07:06.453 "state": "online", 00:07:06.453 "raid_level": "raid0", 00:07:06.453 "superblock": true, 00:07:06.453 "num_base_bdevs": 2, 00:07:06.453 "num_base_bdevs_discovered": 2, 00:07:06.453 "num_base_bdevs_operational": 2, 00:07:06.453 "base_bdevs_list": [ 00:07:06.453 { 00:07:06.453 "name": "BaseBdev1", 00:07:06.453 "uuid": "883182f5-524f-5a5d-8499-7b64aecdef69", 00:07:06.453 "is_configured": true, 00:07:06.453 "data_offset": 2048, 00:07:06.453 "data_size": 63488 00:07:06.453 }, 00:07:06.453 { 00:07:06.453 "name": "BaseBdev2", 00:07:06.453 "uuid": "1aeeca3e-92ca-4a5b-94d6-9ed87854ec02", 00:07:06.453 "is_configured": true, 00:07:06.453 "data_offset": 2048, 00:07:06.453 "data_size": 63488 00:07:06.453 } 00:07:06.453 ] 00:07:06.453 }' 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:06.453 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.713 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:06.972 [2024-07-25 02:31:53.769759] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:06.972 [2024-07-25 02:31:53.769785] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.972 [2024-07-25 02:31:53.770072] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.972 [2024-07-25 02:31:53.770079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.972 [2024-07-25 02:31:53.770084] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.972 [2024-07-25 02:31:53.770087] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x4e618434f00 name raid_bdev1, state offline 00:07:06.972 0 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49440 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49440 ']' 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49440 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49440 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:06.972 killing process with pid 49440 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49440' 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49440 00:07:06.972 [2024-07-25 02:31:53.801380] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.972 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49440 00:07:06.972 [2024-07-25 02:31:53.810558] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.eyyi1iyHPA 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.55 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.55 != \0\.\0\0 ]] 00:07:07.233 00:07:07.233 real 0m4.778s 00:07:07.233 user 0m6.943s 00:07:07.233 sys 0m0.936s 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.233 02:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.233 ************************************ 00:07:07.233 END TEST raid_read_error_test 00:07:07.233 ************************************ 00:07:07.233 02:31:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:07.233 02:31:54 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:07.233 02:31:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:07.233 02:31:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.233 02:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.233 ************************************ 00:07:07.233 START TEST raid_write_error_test 00:07:07.233 ************************************ 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.6XsjaLMjpQ 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49564 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49564 /var/tmp/spdk-raid.sock 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49564 ']' 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.233 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.233 [2024-07-25 02:31:54.060522] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:07:07.233 [2024-07-25 02:31:54.060860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:07.801 EAL: TSC is not safe to use in SMP mode 00:07:07.801 EAL: TSC is not invariant 00:07:07.801 [2024-07-25 02:31:54.479009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.801 [2024-07-25 02:31:54.572638] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:07.801 [2024-07-25 02:31:54.574305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.801 [2024-07-25 02:31:54.574931] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.801 [2024-07-25 02:31:54.574943] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.370 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.370 02:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:08.370 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:08.370 02:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:08.370 BaseBdev1_malloc 00:07:08.370 02:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:08.630 true 00:07:08.630 02:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:08.630 [2024-07-25 02:31:55.501823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:08.630 [2024-07-25 02:31:55.501874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.630 [2024-07-25 02:31:55.501894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x393af7434780 00:07:08.630 [2024-07-25 02:31:55.501916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.630 [2024-07-25 02:31:55.502371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.630 [2024-07-25 02:31:55.502397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:08.630 BaseBdev1 00:07:08.888 02:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:08.888 02:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:08.888 BaseBdev2_malloc 00:07:08.888 02:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:09.148 true 00:07:09.148 02:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.428 [2024-07-25 02:31:56.065832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.428 [2024-07-25 02:31:56.065890] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.428 [2024-07-25 02:31:56.065912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x393af7434c80 00:07:09.428 [2024-07-25 02:31:56.065919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.428 [2024-07-25 02:31:56.066365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.428 [2024-07-25 02:31:56.066418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.428 BaseBdev2 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:09.428 [2024-07-25 02:31:56.253837] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.428 [2024-07-25 02:31:56.254214] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.428 [2024-07-25 02:31:56.254268] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x393af7434f00 00:07:09.428 [2024-07-25 02:31:56.254277] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.428 [2024-07-25 02:31:56.254319] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x393af74a0e20 00:07:09.428 [2024-07-25 02:31:56.254373] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x393af7434f00 00:07:09.428 [2024-07-25 02:31:56.254380] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x393af7434f00 00:07:09.428 [2024-07-25 02:31:56.254398] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.428 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.687 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:09.687 "name": "raid_bdev1", 00:07:09.687 "uuid": "0f9af152-4a2e-11ef-9c8e-7947904e2597", 00:07:09.687 "strip_size_kb": 64, 00:07:09.687 "state": "online", 00:07:09.687 
"raid_level": "raid0", 00:07:09.687 "superblock": true, 00:07:09.687 "num_base_bdevs": 2, 00:07:09.687 "num_base_bdevs_discovered": 2, 00:07:09.687 "num_base_bdevs_operational": 2, 00:07:09.687 "base_bdevs_list": [ 00:07:09.687 { 00:07:09.687 "name": "BaseBdev1", 00:07:09.687 "uuid": "2851bdab-6953-2757-b220-4bf50e9e4815", 00:07:09.687 "is_configured": true, 00:07:09.687 "data_offset": 2048, 00:07:09.687 "data_size": 63488 00:07:09.687 }, 00:07:09.687 { 00:07:09.687 "name": "BaseBdev2", 00:07:09.687 "uuid": "20b17766-8a56-ae54-a29b-f99165fe1ce7", 00:07:09.687 "is_configured": true, 00:07:09.687 "data_offset": 2048, 00:07:09.687 "data_size": 63488 00:07:09.687 } 00:07:09.687 ] 00:07:09.687 }' 00:07:09.687 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:09.687 02:31:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.946 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:09.946 02:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:09.946 [2024-07-25 02:31:56.805899] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x393af74a0ec0 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.334 02:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.334 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:11.334 "name": "raid_bdev1", 00:07:11.334 "uuid": "0f9af152-4a2e-11ef-9c8e-7947904e2597", 00:07:11.334 "strip_size_kb": 64, 00:07:11.334 "state": "online", 00:07:11.334 
"raid_level": "raid0", 00:07:11.334 "superblock": true, 00:07:11.334 "num_base_bdevs": 2, 00:07:11.334 "num_base_bdevs_discovered": 2, 00:07:11.334 "num_base_bdevs_operational": 2, 00:07:11.334 "base_bdevs_list": [ 00:07:11.334 { 00:07:11.334 "name": "BaseBdev1", 00:07:11.334 "uuid": "2851bdab-6953-2757-b220-4bf50e9e4815", 00:07:11.334 "is_configured": true, 00:07:11.334 "data_offset": 2048, 00:07:11.334 "data_size": 63488 00:07:11.334 }, 00:07:11.334 { 00:07:11.334 "name": "BaseBdev2", 00:07:11.334 "uuid": "20b17766-8a56-ae54-a29b-f99165fe1ce7", 00:07:11.334 "is_configured": true, 00:07:11.334 "data_offset": 2048, 00:07:11.334 "data_size": 63488 00:07:11.334 } 00:07:11.334 ] 00:07:11.334 }' 00:07:11.334 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:11.334 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:11.904 [2024-07-25 02:31:58.653989] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.904 [2024-07-25 02:31:58.654015] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.904 [2024-07-25 02:31:58.654271] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.904 [2024-07-25 02:31:58.654285] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.904 [2024-07-25 02:31:58.654290] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.904 [2024-07-25 02:31:58.654293] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x393af7434f00 name raid_bdev1, state offline 00:07:11.904 0 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49564 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49564 ']' 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49564 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49564 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:11.904 killing process with pid 49564 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49564' 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49564 00:07:11.904 [2024-07-25 02:31:58.685381] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.904 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49564 00:07:11.904 [2024-07-25 02:31:58.694534] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.6XsjaLMjpQ 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:07:12.163 00:07:12.163 real 0m4.833s 00:07:12.163 user 0m7.137s 00:07:12.163 sys 0m0.812s 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.163 02:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.163 ************************************ 00:07:12.163 END TEST raid_write_error_test 00:07:12.163 ************************************ 00:07:12.163 02:31:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:12.163 02:31:58 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:12.163 02:31:58 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:12.163 02:31:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:12.163 02:31:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.163 02:31:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.163 ************************************ 00:07:12.163 START TEST raid_state_function_test 00:07:12.163 ************************************ 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:12.163 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:12.164 02:31:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49686 00:07:12.164 Process raid pid: 49686 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49686' 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49686 /var/tmp/spdk-raid.sock 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49686 ']' 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.164 02:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.164 [2024-07-25 02:31:58.939723] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:12.164 [2024-07-25 02:31:58.939999] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:12.732 EAL: TSC is not safe to use in SMP mode 00:07:12.732 EAL: TSC is not invariant 00:07:12.732 [2024-07-25 02:31:59.366836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.732 [2024-07-25 02:31:59.458364] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
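(Annotation, not captured console output.) Unlike the error tests above, raid_state_function_test drives a bare bdev_svc app rather than bdevperf, and it checks the raid bdev's reported "state" as base bdevs come and go. Condensed from the RPC calls visible in the trace that follows, the flow for the concat case is roughly:

    # creating the raid before its base bdevs exist leaves it in state "configuring"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
    # adding both malloc base bdevs brings the raid "online"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
    # concat has no redundancy, so deleting a base bdev drops the raid to "offline"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all

As above, relative paths stand in for the absolute repository paths in the trace, and the intermediate delete and recreate steps for Existed_Raid are omitted from the sketch.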
00:07:12.732 [2024-07-25 02:31:59.460019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.732 [2024-07-25 02:31:59.460583] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.732 [2024-07-25 02:31:59.460595] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.991 02:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.991 02:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:12.991 02:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:13.251 [2024-07-25 02:32:00.043403] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.251 [2024-07-25 02:32:00.043440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.251 [2024-07-25 02:32:00.043444] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.251 [2024-07-25 02:32:00.043450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:13.251 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.510 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:13.510 "name": "Existed_Raid", 00:07:13.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.510 "strip_size_kb": 64, 00:07:13.510 "state": "configuring", 00:07:13.510 "raid_level": "concat", 00:07:13.510 "superblock": false, 00:07:13.510 "num_base_bdevs": 2, 00:07:13.510 "num_base_bdevs_discovered": 0, 00:07:13.510 "num_base_bdevs_operational": 2, 00:07:13.510 "base_bdevs_list": [ 00:07:13.510 { 00:07:13.510 "name": "BaseBdev1", 00:07:13.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.510 "is_configured": false, 00:07:13.510 "data_offset": 0, 00:07:13.510 "data_size": 0 00:07:13.510 }, 00:07:13.510 { 00:07:13.510 "name": "BaseBdev2", 
00:07:13.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.510 "is_configured": false, 00:07:13.510 "data_offset": 0, 00:07:13.510 "data_size": 0 00:07:13.510 } 00:07:13.510 ] 00:07:13.510 }' 00:07:13.510 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:13.510 02:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.769 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:14.028 [2024-07-25 02:32:00.703392] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.028 [2024-07-25 02:32:00.703408] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x72772a34500 name Existed_Raid, state configuring 00:07:14.028 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:14.028 [2024-07-25 02:32:00.891398] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.028 [2024-07-25 02:32:00.891425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.028 [2024-07-25 02:32:00.891428] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.028 [2024-07-25 02:32:00.891449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.028 02:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.286 [2024-07-25 02:32:01.080177] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.286 BaseBdev1 00:07:14.286 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:14.286 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:14.286 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:14.286 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:14.286 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:14.286 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:14.287 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:14.545 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.806 [ 00:07:14.806 { 00:07:14.806 "name": "BaseBdev1", 00:07:14.806 "aliases": [ 00:07:14.806 "127b44ad-4a2e-11ef-9c8e-7947904e2597" 00:07:14.806 ], 00:07:14.806 "product_name": "Malloc disk", 00:07:14.806 "block_size": 512, 00:07:14.806 "num_blocks": 65536, 00:07:14.806 "uuid": "127b44ad-4a2e-11ef-9c8e-7947904e2597", 00:07:14.806 "assigned_rate_limits": { 00:07:14.806 "rw_ios_per_sec": 0, 00:07:14.806 "rw_mbytes_per_sec": 0, 00:07:14.806 "r_mbytes_per_sec": 0, 00:07:14.806 "w_mbytes_per_sec": 0 00:07:14.806 }, 
00:07:14.806 "claimed": true, 00:07:14.806 "claim_type": "exclusive_write", 00:07:14.806 "zoned": false, 00:07:14.806 "supported_io_types": { 00:07:14.806 "read": true, 00:07:14.806 "write": true, 00:07:14.806 "unmap": true, 00:07:14.806 "flush": true, 00:07:14.806 "reset": true, 00:07:14.806 "nvme_admin": false, 00:07:14.806 "nvme_io": false, 00:07:14.806 "nvme_io_md": false, 00:07:14.806 "write_zeroes": true, 00:07:14.806 "zcopy": true, 00:07:14.806 "get_zone_info": false, 00:07:14.806 "zone_management": false, 00:07:14.806 "zone_append": false, 00:07:14.806 "compare": false, 00:07:14.806 "compare_and_write": false, 00:07:14.806 "abort": true, 00:07:14.806 "seek_hole": false, 00:07:14.806 "seek_data": false, 00:07:14.806 "copy": true, 00:07:14.806 "nvme_iov_md": false 00:07:14.806 }, 00:07:14.806 "memory_domains": [ 00:07:14.806 { 00:07:14.806 "dma_device_id": "system", 00:07:14.806 "dma_device_type": 1 00:07:14.806 }, 00:07:14.806 { 00:07:14.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.806 "dma_device_type": 2 00:07:14.806 } 00:07:14.806 ], 00:07:14.806 "driver_specific": {} 00:07:14.806 } 00:07:14.806 ] 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:14.806 "name": "Existed_Raid", 00:07:14.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.806 "strip_size_kb": 64, 00:07:14.806 "state": "configuring", 00:07:14.806 "raid_level": "concat", 00:07:14.806 "superblock": false, 00:07:14.806 "num_base_bdevs": 2, 00:07:14.806 "num_base_bdevs_discovered": 1, 00:07:14.806 "num_base_bdevs_operational": 2, 00:07:14.806 "base_bdevs_list": [ 00:07:14.806 { 00:07:14.806 "name": "BaseBdev1", 00:07:14.806 "uuid": "127b44ad-4a2e-11ef-9c8e-7947904e2597", 00:07:14.806 "is_configured": true, 00:07:14.806 "data_offset": 0, 00:07:14.806 "data_size": 65536 00:07:14.806 }, 00:07:14.806 { 00:07:14.806 "name": "BaseBdev2", 00:07:14.806 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:14.806 "is_configured": false, 00:07:14.806 "data_offset": 0, 00:07:14.806 "data_size": 0 00:07:14.806 } 00:07:14.806 ] 00:07:14.806 }' 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:14.806 02:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.064 02:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:15.323 [2024-07-25 02:32:02.115414] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.323 [2024-07-25 02:32:02.115433] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x72772a34500 name Existed_Raid, state configuring 00:07:15.323 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:15.581 [2024-07-25 02:32:02.299428] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.581 [2024-07-25 02:32:02.300024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.581 [2024-07-25 02:32:02.300057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.581 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.839 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:15.839 "name": "Existed_Raid", 00:07:15.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.839 "strip_size_kb": 64, 00:07:15.839 "state": "configuring", 00:07:15.839 "raid_level": "concat", 00:07:15.839 "superblock": false, 00:07:15.839 "num_base_bdevs": 2, 00:07:15.839 "num_base_bdevs_discovered": 1, 00:07:15.839 
"num_base_bdevs_operational": 2, 00:07:15.839 "base_bdevs_list": [ 00:07:15.839 { 00:07:15.839 "name": "BaseBdev1", 00:07:15.839 "uuid": "127b44ad-4a2e-11ef-9c8e-7947904e2597", 00:07:15.839 "is_configured": true, 00:07:15.839 "data_offset": 0, 00:07:15.839 "data_size": 65536 00:07:15.839 }, 00:07:15.839 { 00:07:15.839 "name": "BaseBdev2", 00:07:15.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.839 "is_configured": false, 00:07:15.839 "data_offset": 0, 00:07:15.839 "data_size": 0 00:07:15.839 } 00:07:15.839 ] 00:07:15.839 }' 00:07:15.839 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:15.839 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.097 [2024-07-25 02:32:02.959542] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.097 [2024-07-25 02:32:02.959563] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x72772a34a00 00:07:16.097 [2024-07-25 02:32:02.959566] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:16.097 [2024-07-25 02:32:02.959582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x72772a97e20 00:07:16.097 [2024-07-25 02:32:02.959647] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x72772a34a00 00:07:16.097 [2024-07-25 02:32:02.959650] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x72772a34a00 00:07:16.097 [2024-07-25 02:32:02.959675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.097 BaseBdev2 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:16.097 02:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:16.356 02:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:16.617 [ 00:07:16.617 { 00:07:16.617 "name": "BaseBdev2", 00:07:16.617 "aliases": [ 00:07:16.617 "139a239b-4a2e-11ef-9c8e-7947904e2597" 00:07:16.617 ], 00:07:16.617 "product_name": "Malloc disk", 00:07:16.617 "block_size": 512, 00:07:16.617 "num_blocks": 65536, 00:07:16.617 "uuid": "139a239b-4a2e-11ef-9c8e-7947904e2597", 00:07:16.617 "assigned_rate_limits": { 00:07:16.617 "rw_ios_per_sec": 0, 00:07:16.617 "rw_mbytes_per_sec": 0, 00:07:16.617 "r_mbytes_per_sec": 0, 00:07:16.617 "w_mbytes_per_sec": 0 00:07:16.617 }, 00:07:16.617 "claimed": true, 00:07:16.617 "claim_type": "exclusive_write", 00:07:16.617 "zoned": 
false, 00:07:16.617 "supported_io_types": { 00:07:16.617 "read": true, 00:07:16.617 "write": true, 00:07:16.617 "unmap": true, 00:07:16.617 "flush": true, 00:07:16.617 "reset": true, 00:07:16.617 "nvme_admin": false, 00:07:16.617 "nvme_io": false, 00:07:16.617 "nvme_io_md": false, 00:07:16.617 "write_zeroes": true, 00:07:16.617 "zcopy": true, 00:07:16.617 "get_zone_info": false, 00:07:16.617 "zone_management": false, 00:07:16.617 "zone_append": false, 00:07:16.617 "compare": false, 00:07:16.617 "compare_and_write": false, 00:07:16.617 "abort": true, 00:07:16.617 "seek_hole": false, 00:07:16.617 "seek_data": false, 00:07:16.617 "copy": true, 00:07:16.617 "nvme_iov_md": false 00:07:16.617 }, 00:07:16.617 "memory_domains": [ 00:07:16.617 { 00:07:16.617 "dma_device_id": "system", 00:07:16.617 "dma_device_type": 1 00:07:16.617 }, 00:07:16.617 { 00:07:16.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.617 "dma_device_type": 2 00:07:16.617 } 00:07:16.617 ], 00:07:16.617 "driver_specific": {} 00:07:16.617 } 00:07:16.617 ] 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.617 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.876 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:16.876 "name": "Existed_Raid", 00:07:16.876 "uuid": "139a2831-4a2e-11ef-9c8e-7947904e2597", 00:07:16.876 "strip_size_kb": 64, 00:07:16.876 "state": "online", 00:07:16.876 "raid_level": "concat", 00:07:16.876 "superblock": false, 00:07:16.876 "num_base_bdevs": 2, 00:07:16.876 "num_base_bdevs_discovered": 2, 00:07:16.876 "num_base_bdevs_operational": 2, 00:07:16.876 "base_bdevs_list": [ 00:07:16.876 { 00:07:16.876 "name": "BaseBdev1", 00:07:16.876 "uuid": "127b44ad-4a2e-11ef-9c8e-7947904e2597", 00:07:16.876 "is_configured": true, 00:07:16.876 "data_offset": 0, 00:07:16.876 "data_size": 65536 00:07:16.876 }, 00:07:16.876 { 
00:07:16.876 "name": "BaseBdev2", 00:07:16.876 "uuid": "139a239b-4a2e-11ef-9c8e-7947904e2597", 00:07:16.876 "is_configured": true, 00:07:16.876 "data_offset": 0, 00:07:16.876 "data_size": 65536 00:07:16.876 } 00:07:16.876 ] 00:07:16.876 }' 00:07:16.876 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:16.876 02:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:17.135 [2024-07-25 02:32:03.971476] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.135 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:17.135 "name": "Existed_Raid", 00:07:17.135 "aliases": [ 00:07:17.135 "139a2831-4a2e-11ef-9c8e-7947904e2597" 00:07:17.135 ], 00:07:17.135 "product_name": "Raid Volume", 00:07:17.135 "block_size": 512, 00:07:17.135 "num_blocks": 131072, 00:07:17.135 "uuid": "139a2831-4a2e-11ef-9c8e-7947904e2597", 00:07:17.135 "assigned_rate_limits": { 00:07:17.135 "rw_ios_per_sec": 0, 00:07:17.135 "rw_mbytes_per_sec": 0, 00:07:17.135 "r_mbytes_per_sec": 0, 00:07:17.135 "w_mbytes_per_sec": 0 00:07:17.135 }, 00:07:17.135 "claimed": false, 00:07:17.135 "zoned": false, 00:07:17.135 "supported_io_types": { 00:07:17.135 "read": true, 00:07:17.135 "write": true, 00:07:17.135 "unmap": true, 00:07:17.135 "flush": true, 00:07:17.135 "reset": true, 00:07:17.135 "nvme_admin": false, 00:07:17.135 "nvme_io": false, 00:07:17.135 "nvme_io_md": false, 00:07:17.135 "write_zeroes": true, 00:07:17.135 "zcopy": false, 00:07:17.135 "get_zone_info": false, 00:07:17.135 "zone_management": false, 00:07:17.135 "zone_append": false, 00:07:17.135 "compare": false, 00:07:17.135 "compare_and_write": false, 00:07:17.135 "abort": false, 00:07:17.135 "seek_hole": false, 00:07:17.135 "seek_data": false, 00:07:17.135 "copy": false, 00:07:17.135 "nvme_iov_md": false 00:07:17.135 }, 00:07:17.135 "memory_domains": [ 00:07:17.135 { 00:07:17.135 "dma_device_id": "system", 00:07:17.136 "dma_device_type": 1 00:07:17.136 }, 00:07:17.136 { 00:07:17.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.136 "dma_device_type": 2 00:07:17.136 }, 00:07:17.136 { 00:07:17.136 "dma_device_id": "system", 00:07:17.136 "dma_device_type": 1 00:07:17.136 }, 00:07:17.136 { 00:07:17.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.136 "dma_device_type": 2 00:07:17.136 } 00:07:17.136 ], 00:07:17.136 "driver_specific": { 00:07:17.136 "raid": { 00:07:17.136 "uuid": "139a2831-4a2e-11ef-9c8e-7947904e2597", 00:07:17.136 "strip_size_kb": 64, 00:07:17.136 "state": 
"online", 00:07:17.136 "raid_level": "concat", 00:07:17.136 "superblock": false, 00:07:17.136 "num_base_bdevs": 2, 00:07:17.136 "num_base_bdevs_discovered": 2, 00:07:17.136 "num_base_bdevs_operational": 2, 00:07:17.136 "base_bdevs_list": [ 00:07:17.136 { 00:07:17.136 "name": "BaseBdev1", 00:07:17.136 "uuid": "127b44ad-4a2e-11ef-9c8e-7947904e2597", 00:07:17.136 "is_configured": true, 00:07:17.136 "data_offset": 0, 00:07:17.136 "data_size": 65536 00:07:17.136 }, 00:07:17.136 { 00:07:17.136 "name": "BaseBdev2", 00:07:17.136 "uuid": "139a239b-4a2e-11ef-9c8e-7947904e2597", 00:07:17.136 "is_configured": true, 00:07:17.136 "data_offset": 0, 00:07:17.136 "data_size": 65536 00:07:17.136 } 00:07:17.136 ] 00:07:17.136 } 00:07:17.136 } 00:07:17.136 }' 00:07:17.136 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.136 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:17.136 BaseBdev2' 00:07:17.136 02:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:17.136 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:17.136 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:17.396 "name": "BaseBdev1", 00:07:17.396 "aliases": [ 00:07:17.396 "127b44ad-4a2e-11ef-9c8e-7947904e2597" 00:07:17.396 ], 00:07:17.396 "product_name": "Malloc disk", 00:07:17.396 "block_size": 512, 00:07:17.396 "num_blocks": 65536, 00:07:17.396 "uuid": "127b44ad-4a2e-11ef-9c8e-7947904e2597", 00:07:17.396 "assigned_rate_limits": { 00:07:17.396 "rw_ios_per_sec": 0, 00:07:17.396 "rw_mbytes_per_sec": 0, 00:07:17.396 "r_mbytes_per_sec": 0, 00:07:17.396 "w_mbytes_per_sec": 0 00:07:17.396 }, 00:07:17.396 "claimed": true, 00:07:17.396 "claim_type": "exclusive_write", 00:07:17.396 "zoned": false, 00:07:17.396 "supported_io_types": { 00:07:17.396 "read": true, 00:07:17.396 "write": true, 00:07:17.396 "unmap": true, 00:07:17.396 "flush": true, 00:07:17.396 "reset": true, 00:07:17.396 "nvme_admin": false, 00:07:17.396 "nvme_io": false, 00:07:17.396 "nvme_io_md": false, 00:07:17.396 "write_zeroes": true, 00:07:17.396 "zcopy": true, 00:07:17.396 "get_zone_info": false, 00:07:17.396 "zone_management": false, 00:07:17.396 "zone_append": false, 00:07:17.396 "compare": false, 00:07:17.396 "compare_and_write": false, 00:07:17.396 "abort": true, 00:07:17.396 "seek_hole": false, 00:07:17.396 "seek_data": false, 00:07:17.396 "copy": true, 00:07:17.396 "nvme_iov_md": false 00:07:17.396 }, 00:07:17.396 "memory_domains": [ 00:07:17.396 { 00:07:17.396 "dma_device_id": "system", 00:07:17.396 "dma_device_type": 1 00:07:17.396 }, 00:07:17.396 { 00:07:17.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.396 "dma_device_type": 2 00:07:17.396 } 00:07:17.396 ], 00:07:17.396 "driver_specific": {} 00:07:17.396 }' 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:17.396 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:17.655 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:17.655 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:17.655 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:17.655 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:17.656 "name": "BaseBdev2", 00:07:17.656 "aliases": [ 00:07:17.656 "139a239b-4a2e-11ef-9c8e-7947904e2597" 00:07:17.656 ], 00:07:17.656 "product_name": "Malloc disk", 00:07:17.656 "block_size": 512, 00:07:17.656 "num_blocks": 65536, 00:07:17.656 "uuid": "139a239b-4a2e-11ef-9c8e-7947904e2597", 00:07:17.656 "assigned_rate_limits": { 00:07:17.656 "rw_ios_per_sec": 0, 00:07:17.656 "rw_mbytes_per_sec": 0, 00:07:17.656 "r_mbytes_per_sec": 0, 00:07:17.656 "w_mbytes_per_sec": 0 00:07:17.656 }, 00:07:17.656 "claimed": true, 00:07:17.656 "claim_type": "exclusive_write", 00:07:17.656 "zoned": false, 00:07:17.656 "supported_io_types": { 00:07:17.656 "read": true, 00:07:17.656 "write": true, 00:07:17.656 "unmap": true, 00:07:17.656 "flush": true, 00:07:17.656 "reset": true, 00:07:17.656 "nvme_admin": false, 00:07:17.656 "nvme_io": false, 00:07:17.656 "nvme_io_md": false, 00:07:17.656 "write_zeroes": true, 00:07:17.656 "zcopy": true, 00:07:17.656 "get_zone_info": false, 00:07:17.656 "zone_management": false, 00:07:17.656 "zone_append": false, 00:07:17.656 "compare": false, 00:07:17.656 "compare_and_write": false, 00:07:17.656 "abort": true, 00:07:17.656 "seek_hole": false, 00:07:17.656 "seek_data": false, 00:07:17.656 "copy": true, 00:07:17.656 "nvme_iov_md": false 00:07:17.656 }, 00:07:17.656 "memory_domains": [ 00:07:17.656 { 00:07:17.656 "dma_device_id": "system", 00:07:17.656 "dma_device_type": 1 00:07:17.656 }, 00:07:17.656 { 00:07:17.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.656 "dma_device_type": 2 00:07:17.656 } 00:07:17.656 ], 00:07:17.656 "driver_specific": {} 00:07:17.656 }' 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:17.656 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:17.916 [2024-07-25 02:32:04.747474] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.916 [2024-07-25 02:32:04.747487] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.916 [2024-07-25 02:32:04.747498] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.916 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.176 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:18.176 "name": "Existed_Raid", 00:07:18.176 "uuid": "139a2831-4a2e-11ef-9c8e-7947904e2597", 00:07:18.176 "strip_size_kb": 64, 00:07:18.176 "state": "offline", 00:07:18.176 "raid_level": "concat", 00:07:18.176 "superblock": false, 00:07:18.176 
"num_base_bdevs": 2, 00:07:18.176 "num_base_bdevs_discovered": 1, 00:07:18.176 "num_base_bdevs_operational": 1, 00:07:18.176 "base_bdevs_list": [ 00:07:18.176 { 00:07:18.176 "name": null, 00:07:18.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.176 "is_configured": false, 00:07:18.176 "data_offset": 0, 00:07:18.176 "data_size": 65536 00:07:18.176 }, 00:07:18.176 { 00:07:18.176 "name": "BaseBdev2", 00:07:18.176 "uuid": "139a239b-4a2e-11ef-9c8e-7947904e2597", 00:07:18.176 "is_configured": true, 00:07:18.176 "data_offset": 0, 00:07:18.176 "data_size": 65536 00:07:18.176 } 00:07:18.176 ] 00:07:18.176 }' 00:07:18.176 02:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:18.176 02:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.435 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:18.435 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:18.435 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.435 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:18.695 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:18.695 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:18.695 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:18.955 [2024-07-25 02:32:05.592083] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:18.955 [2024-07-25 02:32:05.592103] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x72772a34a00 name Existed_Raid, state offline 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49686 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49686 ']' 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49686 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49686 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:18.955 killing process with pid 49686 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49686' 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49686 00:07:18.955 [2024-07-25 02:32:05.818919] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.955 [2024-07-25 02:32:05.818952] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.955 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49686 00:07:19.217 02:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:19.217 00:07:19.217 real 0m7.063s 00:07:19.217 user 0m11.994s 00:07:19.217 sys 0m1.453s 00:07:19.217 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.217 02:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.217 ************************************ 00:07:19.217 END TEST raid_state_function_test 00:07:19.217 ************************************ 00:07:19.217 02:32:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:19.217 02:32:06 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:19.217 02:32:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:19.217 02:32:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.217 02:32:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.217 ************************************ 00:07:19.217 START TEST raid_state_function_test_sb 00:07:19.217 ************************************ 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49949 00:07:19.217 Process raid pid: 49949 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49949' 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49949 /var/tmp/spdk-raid.sock 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 49949 ']' 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.217 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.217 [2024-07-25 02:32:06.055802] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:19.217 [2024-07-25 02:32:06.056073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:19.797 EAL: TSC is not safe to use in SMP mode 00:07:19.797 EAL: TSC is not invariant 00:07:19.797 [2024-07-25 02:32:06.484134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.797 [2024-07-25 02:32:06.575141] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:19.797 [2024-07-25 02:32:06.576868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.797 [2024-07-25 02:32:06.577457] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.797 [2024-07-25 02:32:06.577468] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.056 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.056 02:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:20.056 02:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:20.316 [2024-07-25 02:32:07.080328] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.316 [2024-07-25 02:32:07.080365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.316 [2024-07-25 02:32:07.080368] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.316 [2024-07-25 02:32:07.080374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.316 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.574 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.574 "name": "Existed_Raid", 00:07:20.574 "uuid": "160eef1f-4a2e-11ef-9c8e-7947904e2597", 00:07:20.574 "strip_size_kb": 64, 00:07:20.574 "state": "configuring", 00:07:20.574 "raid_level": "concat", 00:07:20.574 "superblock": true, 00:07:20.574 "num_base_bdevs": 2, 00:07:20.574 "num_base_bdevs_discovered": 0, 00:07:20.574 "num_base_bdevs_operational": 2, 00:07:20.574 "base_bdevs_list": [ 00:07:20.574 { 00:07:20.574 "name": "BaseBdev1", 00:07:20.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.574 "is_configured": false, 00:07:20.574 "data_offset": 0, 00:07:20.574 "data_size": 0 00:07:20.574 }, 
00:07:20.574 { 00:07:20.574 "name": "BaseBdev2", 00:07:20.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.574 "is_configured": false, 00:07:20.574 "data_offset": 0, 00:07:20.574 "data_size": 0 00:07:20.574 } 00:07:20.574 ] 00:07:20.574 }' 00:07:20.574 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.574 02:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.834 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:21.093 [2024-07-25 02:32:07.748309] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.093 [2024-07-25 02:32:07.748327] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x149c034500 name Existed_Raid, state configuring 00:07:21.093 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:21.093 [2024-07-25 02:32:07.936315] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.093 [2024-07-25 02:32:07.936340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.093 [2024-07-25 02:32:07.936344] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.093 [2024-07-25 02:32:07.936365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.093 02:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.351 [2024-07-25 02:32:08.121129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.351 BaseBdev1 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:21.351 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:21.610 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.869 [ 00:07:21.869 { 00:07:21.869 "name": "BaseBdev1", 00:07:21.869 "aliases": [ 00:07:21.869 "16ada005-4a2e-11ef-9c8e-7947904e2597" 00:07:21.869 ], 00:07:21.869 "product_name": "Malloc disk", 00:07:21.869 "block_size": 512, 00:07:21.869 "num_blocks": 65536, 00:07:21.869 "uuid": "16ada005-4a2e-11ef-9c8e-7947904e2597", 00:07:21.869 "assigned_rate_limits": { 00:07:21.869 "rw_ios_per_sec": 0, 00:07:21.869 "rw_mbytes_per_sec": 
0, 00:07:21.869 "r_mbytes_per_sec": 0, 00:07:21.869 "w_mbytes_per_sec": 0 00:07:21.869 }, 00:07:21.869 "claimed": true, 00:07:21.869 "claim_type": "exclusive_write", 00:07:21.869 "zoned": false, 00:07:21.869 "supported_io_types": { 00:07:21.869 "read": true, 00:07:21.869 "write": true, 00:07:21.869 "unmap": true, 00:07:21.869 "flush": true, 00:07:21.869 "reset": true, 00:07:21.869 "nvme_admin": false, 00:07:21.869 "nvme_io": false, 00:07:21.869 "nvme_io_md": false, 00:07:21.869 "write_zeroes": true, 00:07:21.869 "zcopy": true, 00:07:21.869 "get_zone_info": false, 00:07:21.869 "zone_management": false, 00:07:21.869 "zone_append": false, 00:07:21.869 "compare": false, 00:07:21.869 "compare_and_write": false, 00:07:21.869 "abort": true, 00:07:21.869 "seek_hole": false, 00:07:21.869 "seek_data": false, 00:07:21.869 "copy": true, 00:07:21.869 "nvme_iov_md": false 00:07:21.869 }, 00:07:21.869 "memory_domains": [ 00:07:21.869 { 00:07:21.869 "dma_device_id": "system", 00:07:21.869 "dma_device_type": 1 00:07:21.869 }, 00:07:21.869 { 00:07:21.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.869 "dma_device_type": 2 00:07:21.869 } 00:07:21.869 ], 00:07:21.869 "driver_specific": {} 00:07:21.869 } 00:07:21.869 ] 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:21.869 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:21.870 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.870 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.870 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:21.870 "name": "Existed_Raid", 00:07:21.870 "uuid": "16918c3d-4a2e-11ef-9c8e-7947904e2597", 00:07:21.870 "strip_size_kb": 64, 00:07:21.870 "state": "configuring", 00:07:21.870 "raid_level": "concat", 00:07:21.870 "superblock": true, 00:07:21.870 "num_base_bdevs": 2, 00:07:21.870 "num_base_bdevs_discovered": 1, 00:07:21.870 "num_base_bdevs_operational": 2, 00:07:21.870 "base_bdevs_list": [ 00:07:21.870 { 00:07:21.870 "name": "BaseBdev1", 00:07:21.870 "uuid": "16ada005-4a2e-11ef-9c8e-7947904e2597", 00:07:21.870 "is_configured": true, 00:07:21.870 "data_offset": 2048, 00:07:21.870 "data_size": 
63488 00:07:21.870 }, 00:07:21.870 { 00:07:21.870 "name": "BaseBdev2", 00:07:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.870 "is_configured": false, 00:07:21.870 "data_offset": 0, 00:07:21.870 "data_size": 0 00:07:21.870 } 00:07:21.870 ] 00:07:21.870 }' 00:07:21.870 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:21.870 02:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.128 02:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:22.387 [2024-07-25 02:32:09.144361] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.387 [2024-07-25 02:32:09.144379] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x149c034500 name Existed_Raid, state configuring 00:07:22.387 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:22.647 [2024-07-25 02:32:09.336373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.647 [2024-07-25 02:32:09.337067] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.647 [2024-07-25 02:32:09.337105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.647 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.907 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:22.907 "name": "Existed_Raid", 00:07:22.907 "uuid": "17672dec-4a2e-11ef-9c8e-7947904e2597", 00:07:22.907 "strip_size_kb": 64, 00:07:22.907 
"state": "configuring", 00:07:22.907 "raid_level": "concat", 00:07:22.907 "superblock": true, 00:07:22.907 "num_base_bdevs": 2, 00:07:22.907 "num_base_bdevs_discovered": 1, 00:07:22.907 "num_base_bdevs_operational": 2, 00:07:22.907 "base_bdevs_list": [ 00:07:22.907 { 00:07:22.907 "name": "BaseBdev1", 00:07:22.907 "uuid": "16ada005-4a2e-11ef-9c8e-7947904e2597", 00:07:22.907 "is_configured": true, 00:07:22.907 "data_offset": 2048, 00:07:22.907 "data_size": 63488 00:07:22.907 }, 00:07:22.907 { 00:07:22.907 "name": "BaseBdev2", 00:07:22.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.907 "is_configured": false, 00:07:22.907 "data_offset": 0, 00:07:22.907 "data_size": 0 00:07:22.907 } 00:07:22.907 ] 00:07:22.907 }' 00:07:22.907 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:22.907 02:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:23.166 [2024-07-25 02:32:09.984505] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.166 [2024-07-25 02:32:09.984570] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x149c034a00 00:07:23.166 [2024-07-25 02:32:09.984575] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.166 [2024-07-25 02:32:09.984593] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x149c097e20 00:07:23.166 [2024-07-25 02:32:09.984624] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x149c034a00 00:07:23.166 [2024-07-25 02:32:09.984628] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x149c034a00 00:07:23.166 [2024-07-25 02:32:09.984643] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.166 BaseBdev2 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:23.166 02:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:23.166 02:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:23.426 02:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:23.685 [ 00:07:23.685 { 00:07:23.685 "name": "BaseBdev2", 00:07:23.685 "aliases": [ 00:07:23.685 "17ca0f47-4a2e-11ef-9c8e-7947904e2597" 00:07:23.685 ], 00:07:23.685 "product_name": "Malloc disk", 00:07:23.685 "block_size": 512, 00:07:23.685 "num_blocks": 65536, 00:07:23.685 "uuid": "17ca0f47-4a2e-11ef-9c8e-7947904e2597", 00:07:23.685 "assigned_rate_limits": { 00:07:23.685 "rw_ios_per_sec": 0, 
00:07:23.685 "rw_mbytes_per_sec": 0, 00:07:23.685 "r_mbytes_per_sec": 0, 00:07:23.685 "w_mbytes_per_sec": 0 00:07:23.685 }, 00:07:23.685 "claimed": true, 00:07:23.685 "claim_type": "exclusive_write", 00:07:23.685 "zoned": false, 00:07:23.685 "supported_io_types": { 00:07:23.685 "read": true, 00:07:23.685 "write": true, 00:07:23.685 "unmap": true, 00:07:23.685 "flush": true, 00:07:23.685 "reset": true, 00:07:23.685 "nvme_admin": false, 00:07:23.685 "nvme_io": false, 00:07:23.685 "nvme_io_md": false, 00:07:23.685 "write_zeroes": true, 00:07:23.685 "zcopy": true, 00:07:23.685 "get_zone_info": false, 00:07:23.685 "zone_management": false, 00:07:23.685 "zone_append": false, 00:07:23.685 "compare": false, 00:07:23.685 "compare_and_write": false, 00:07:23.685 "abort": true, 00:07:23.685 "seek_hole": false, 00:07:23.685 "seek_data": false, 00:07:23.685 "copy": true, 00:07:23.685 "nvme_iov_md": false 00:07:23.685 }, 00:07:23.685 "memory_domains": [ 00:07:23.685 { 00:07:23.685 "dma_device_id": "system", 00:07:23.685 "dma_device_type": 1 00:07:23.685 }, 00:07:23.685 { 00:07:23.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.685 "dma_device_type": 2 00:07:23.685 } 00:07:23.685 ], 00:07:23.685 "driver_specific": {} 00:07:23.685 } 00:07:23.685 ] 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.685 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.944 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:23.944 "name": "Existed_Raid", 00:07:23.944 "uuid": "17672dec-4a2e-11ef-9c8e-7947904e2597", 00:07:23.944 "strip_size_kb": 64, 00:07:23.944 "state": "online", 00:07:23.944 "raid_level": "concat", 00:07:23.944 "superblock": true, 00:07:23.944 "num_base_bdevs": 2, 00:07:23.944 "num_base_bdevs_discovered": 2, 00:07:23.944 "num_base_bdevs_operational": 2, 
00:07:23.944 "base_bdevs_list": [ 00:07:23.944 { 00:07:23.944 "name": "BaseBdev1", 00:07:23.944 "uuid": "16ada005-4a2e-11ef-9c8e-7947904e2597", 00:07:23.944 "is_configured": true, 00:07:23.944 "data_offset": 2048, 00:07:23.944 "data_size": 63488 00:07:23.944 }, 00:07:23.944 { 00:07:23.944 "name": "BaseBdev2", 00:07:23.944 "uuid": "17ca0f47-4a2e-11ef-9c8e-7947904e2597", 00:07:23.944 "is_configured": true, 00:07:23.944 "data_offset": 2048, 00:07:23.944 "data_size": 63488 00:07:23.944 } 00:07:23.944 ] 00:07:23.944 }' 00:07:23.944 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:23.944 02:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:24.203 02:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:24.203 [2024-07-25 02:32:11.004419] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.203 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:24.203 "name": "Existed_Raid", 00:07:24.203 "aliases": [ 00:07:24.203 "17672dec-4a2e-11ef-9c8e-7947904e2597" 00:07:24.203 ], 00:07:24.203 "product_name": "Raid Volume", 00:07:24.203 "block_size": 512, 00:07:24.203 "num_blocks": 126976, 00:07:24.203 "uuid": "17672dec-4a2e-11ef-9c8e-7947904e2597", 00:07:24.203 "assigned_rate_limits": { 00:07:24.203 "rw_ios_per_sec": 0, 00:07:24.203 "rw_mbytes_per_sec": 0, 00:07:24.203 "r_mbytes_per_sec": 0, 00:07:24.203 "w_mbytes_per_sec": 0 00:07:24.203 }, 00:07:24.203 "claimed": false, 00:07:24.203 "zoned": false, 00:07:24.203 "supported_io_types": { 00:07:24.203 "read": true, 00:07:24.203 "write": true, 00:07:24.203 "unmap": true, 00:07:24.203 "flush": true, 00:07:24.203 "reset": true, 00:07:24.203 "nvme_admin": false, 00:07:24.203 "nvme_io": false, 00:07:24.203 "nvme_io_md": false, 00:07:24.203 "write_zeroes": true, 00:07:24.203 "zcopy": false, 00:07:24.203 "get_zone_info": false, 00:07:24.203 "zone_management": false, 00:07:24.203 "zone_append": false, 00:07:24.203 "compare": false, 00:07:24.203 "compare_and_write": false, 00:07:24.203 "abort": false, 00:07:24.203 "seek_hole": false, 00:07:24.203 "seek_data": false, 00:07:24.203 "copy": false, 00:07:24.203 "nvme_iov_md": false 00:07:24.203 }, 00:07:24.203 "memory_domains": [ 00:07:24.203 { 00:07:24.203 "dma_device_id": "system", 00:07:24.203 "dma_device_type": 1 00:07:24.203 }, 00:07:24.203 { 00:07:24.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.203 "dma_device_type": 2 00:07:24.203 }, 00:07:24.203 { 00:07:24.203 "dma_device_id": "system", 00:07:24.203 "dma_device_type": 1 00:07:24.203 
}, 00:07:24.203 { 00:07:24.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.203 "dma_device_type": 2 00:07:24.203 } 00:07:24.203 ], 00:07:24.203 "driver_specific": { 00:07:24.203 "raid": { 00:07:24.203 "uuid": "17672dec-4a2e-11ef-9c8e-7947904e2597", 00:07:24.203 "strip_size_kb": 64, 00:07:24.203 "state": "online", 00:07:24.203 "raid_level": "concat", 00:07:24.203 "superblock": true, 00:07:24.203 "num_base_bdevs": 2, 00:07:24.203 "num_base_bdevs_discovered": 2, 00:07:24.203 "num_base_bdevs_operational": 2, 00:07:24.203 "base_bdevs_list": [ 00:07:24.203 { 00:07:24.203 "name": "BaseBdev1", 00:07:24.203 "uuid": "16ada005-4a2e-11ef-9c8e-7947904e2597", 00:07:24.203 "is_configured": true, 00:07:24.203 "data_offset": 2048, 00:07:24.203 "data_size": 63488 00:07:24.203 }, 00:07:24.203 { 00:07:24.203 "name": "BaseBdev2", 00:07:24.203 "uuid": "17ca0f47-4a2e-11ef-9c8e-7947904e2597", 00:07:24.203 "is_configured": true, 00:07:24.203 "data_offset": 2048, 00:07:24.204 "data_size": 63488 00:07:24.204 } 00:07:24.204 ] 00:07:24.204 } 00:07:24.204 } 00:07:24.204 }' 00:07:24.204 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:24.204 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:24.204 BaseBdev2' 00:07:24.204 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:24.204 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:24.204 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:24.464 "name": "BaseBdev1", 00:07:24.464 "aliases": [ 00:07:24.464 "16ada005-4a2e-11ef-9c8e-7947904e2597" 00:07:24.464 ], 00:07:24.464 "product_name": "Malloc disk", 00:07:24.464 "block_size": 512, 00:07:24.464 "num_blocks": 65536, 00:07:24.464 "uuid": "16ada005-4a2e-11ef-9c8e-7947904e2597", 00:07:24.464 "assigned_rate_limits": { 00:07:24.464 "rw_ios_per_sec": 0, 00:07:24.464 "rw_mbytes_per_sec": 0, 00:07:24.464 "r_mbytes_per_sec": 0, 00:07:24.464 "w_mbytes_per_sec": 0 00:07:24.464 }, 00:07:24.464 "claimed": true, 00:07:24.464 "claim_type": "exclusive_write", 00:07:24.464 "zoned": false, 00:07:24.464 "supported_io_types": { 00:07:24.464 "read": true, 00:07:24.464 "write": true, 00:07:24.464 "unmap": true, 00:07:24.464 "flush": true, 00:07:24.464 "reset": true, 00:07:24.464 "nvme_admin": false, 00:07:24.464 "nvme_io": false, 00:07:24.464 "nvme_io_md": false, 00:07:24.464 "write_zeroes": true, 00:07:24.464 "zcopy": true, 00:07:24.464 "get_zone_info": false, 00:07:24.464 "zone_management": false, 00:07:24.464 "zone_append": false, 00:07:24.464 "compare": false, 00:07:24.464 "compare_and_write": false, 00:07:24.464 "abort": true, 00:07:24.464 "seek_hole": false, 00:07:24.464 "seek_data": false, 00:07:24.464 "copy": true, 00:07:24.464 "nvme_iov_md": false 00:07:24.464 }, 00:07:24.464 "memory_domains": [ 00:07:24.464 { 00:07:24.464 "dma_device_id": "system", 00:07:24.464 "dma_device_type": 1 00:07:24.464 }, 00:07:24.464 { 00:07:24.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.464 "dma_device_type": 2 00:07:24.464 } 00:07:24.464 ], 00:07:24.464 "driver_specific": {} 00:07:24.464 }' 00:07:24.464 02:32:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:24.464 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:24.723 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:24.724 "name": "BaseBdev2", 00:07:24.724 "aliases": [ 00:07:24.724 "17ca0f47-4a2e-11ef-9c8e-7947904e2597" 00:07:24.724 ], 00:07:24.724 "product_name": "Malloc disk", 00:07:24.724 "block_size": 512, 00:07:24.724 "num_blocks": 65536, 00:07:24.724 "uuid": "17ca0f47-4a2e-11ef-9c8e-7947904e2597", 00:07:24.724 "assigned_rate_limits": { 00:07:24.724 "rw_ios_per_sec": 0, 00:07:24.724 "rw_mbytes_per_sec": 0, 00:07:24.724 "r_mbytes_per_sec": 0, 00:07:24.724 "w_mbytes_per_sec": 0 00:07:24.724 }, 00:07:24.724 "claimed": true, 00:07:24.724 "claim_type": "exclusive_write", 00:07:24.724 "zoned": false, 00:07:24.724 "supported_io_types": { 00:07:24.724 "read": true, 00:07:24.724 "write": true, 00:07:24.724 "unmap": true, 00:07:24.724 "flush": true, 00:07:24.724 "reset": true, 00:07:24.724 "nvme_admin": false, 00:07:24.724 "nvme_io": false, 00:07:24.724 "nvme_io_md": false, 00:07:24.724 "write_zeroes": true, 00:07:24.724 "zcopy": true, 00:07:24.724 "get_zone_info": false, 00:07:24.724 "zone_management": false, 00:07:24.724 "zone_append": false, 00:07:24.724 "compare": false, 00:07:24.724 "compare_and_write": false, 00:07:24.724 "abort": true, 00:07:24.724 "seek_hole": false, 00:07:24.724 "seek_data": false, 00:07:24.724 "copy": true, 00:07:24.724 "nvme_iov_md": false 00:07:24.724 }, 00:07:24.724 "memory_domains": [ 00:07:24.724 { 00:07:24.724 "dma_device_id": "system", 00:07:24.724 "dma_device_type": 1 00:07:24.724 }, 00:07:24.724 { 00:07:24.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.724 "dma_device_type": 2 00:07:24.724 } 00:07:24.724 ], 00:07:24.724 "driver_specific": {} 00:07:24.724 }' 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:24.724 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:24.983 [2024-07-25 02:32:11.768455] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:24.983 [2024-07-25 02:32:11.768470] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:24.983 [2024-07-25 02:32:11.768478] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:07:24.983 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.242 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:25.242 "name": "Existed_Raid", 00:07:25.242 "uuid": "17672dec-4a2e-11ef-9c8e-7947904e2597", 00:07:25.242 "strip_size_kb": 64, 00:07:25.242 "state": "offline", 00:07:25.242 "raid_level": "concat", 00:07:25.242 "superblock": true, 00:07:25.242 "num_base_bdevs": 2, 00:07:25.242 "num_base_bdevs_discovered": 1, 00:07:25.242 "num_base_bdevs_operational": 1, 00:07:25.242 "base_bdevs_list": [ 00:07:25.242 { 00:07:25.242 "name": null, 00:07:25.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.242 "is_configured": false, 00:07:25.242 "data_offset": 2048, 00:07:25.242 "data_size": 63488 00:07:25.242 }, 00:07:25.242 { 00:07:25.242 "name": "BaseBdev2", 00:07:25.242 "uuid": "17ca0f47-4a2e-11ef-9c8e-7947904e2597", 00:07:25.242 "is_configured": true, 00:07:25.242 "data_offset": 2048, 00:07:25.242 "data_size": 63488 00:07:25.242 } 00:07:25.242 ] 00:07:25.242 }' 00:07:25.242 02:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:25.242 02:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.500 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:25.500 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:25.500 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.500 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:25.758 [2024-07-25 02:32:12.613118] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.758 [2024-07-25 02:32:12.613136] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x149c034a00 name Existed_Raid, state offline 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.758 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:26.017 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:26.017 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:26.017 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:26.017 02:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49949 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 49949 ']' 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 49949 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 49949 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:26.018 killing process with pid 49949 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49949' 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 49949 00:07:26.018 [2024-07-25 02:32:12.838138] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.018 [2024-07-25 02:32:12.838172] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.018 02:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 49949 00:07:26.277 02:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:26.277 00:07:26.277 real 0m6.967s 00:07:26.277 user 0m11.957s 00:07:26.277 sys 0m1.332s 00:07:26.277 02:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.277 02:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.277 ************************************ 00:07:26.277 END TEST raid_state_function_test_sb 00:07:26.277 ************************************ 00:07:26.277 02:32:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:26.277 02:32:13 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:26.277 02:32:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:26.277 02:32:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.277 02:32:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.277 ************************************ 00:07:26.277 START TEST raid_superblock_test 00:07:26.277 ************************************ 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50215 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50215 /var/tmp/spdk-raid.sock 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50215 ']' 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.277 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.277 [2024-07-25 02:32:13.072227] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:26.277 [2024-07-25 02:32:13.072614] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:26.842 EAL: TSC is not safe to use in SMP mode 00:07:26.842 EAL: TSC is not invariant 00:07:26.842 [2024-07-25 02:32:13.490128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.842 [2024-07-25 02:32:13.581941] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:26.842 [2024-07-25 02:32:13.583609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.842 [2024-07-25 02:32:13.584179] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.842 [2024-07-25 02:32:13.584195] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:27.100 02:32:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:27.357 malloc1 00:07:27.357 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.615 [2024-07-25 02:32:14.291019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.615 [2024-07-25 02:32:14.291058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.615 [2024-07-25 02:32:14.291065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x308cf9234780 00:07:27.615 [2024-07-25 02:32:14.291071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.615 [2024-07-25 02:32:14.291736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.615 [2024-07-25 02:32:14.291763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.615 pt1 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:27.615 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:27.615 02:32:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:27.615 malloc2 00:07:27.874 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.874 [2024-07-25 02:32:14.667025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.874 [2024-07-25 02:32:14.667081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.874 [2024-07-25 02:32:14.667089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x308cf9234c80 00:07:27.874 [2024-07-25 02:32:14.667094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.874 [2024-07-25 02:32:14.667539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.874 [2024-07-25 02:32:14.667566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:27.874 pt2 00:07:27.874 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:27.874 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:27.874 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:28.133 [2024-07-25 02:32:14.851035] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:28.133 [2024-07-25 02:32:14.851416] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:28.133 [2024-07-25 02:32:14.851460] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x308cf9234f00 00:07:28.133 [2024-07-25 02:32:14.851469] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.133 [2024-07-25 02:32:14.851494] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x308cf9297e20 00:07:28.133 [2024-07-25 02:32:14.851549] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x308cf9234f00 00:07:28.133 [2024-07-25 02:32:14.851556] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x308cf9234f00 00:07:28.133 [2024-07-25 02:32:14.851574] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.133 02:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.392 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:28.392 "name": "raid_bdev1", 00:07:28.392 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:28.392 "strip_size_kb": 64, 00:07:28.392 "state": "online", 00:07:28.392 "raid_level": "concat", 00:07:28.392 "superblock": true, 00:07:28.392 "num_base_bdevs": 2, 00:07:28.392 "num_base_bdevs_discovered": 2, 00:07:28.392 "num_base_bdevs_operational": 2, 00:07:28.392 "base_bdevs_list": [ 00:07:28.392 { 00:07:28.392 "name": "pt1", 00:07:28.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.392 "is_configured": true, 00:07:28.392 "data_offset": 2048, 00:07:28.392 "data_size": 63488 00:07:28.392 }, 00:07:28.392 { 00:07:28.392 "name": "pt2", 00:07:28.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.392 "is_configured": true, 00:07:28.392 "data_offset": 2048, 00:07:28.392 "data_size": 63488 00:07:28.392 } 00:07:28.392 ] 00:07:28.392 }' 00:07:28.392 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:28.392 02:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:28.651 [2024-07-25 02:32:15.495048] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:28.651 "name": "raid_bdev1", 00:07:28.651 "aliases": [ 00:07:28.651 "1ab0a660-4a2e-11ef-9c8e-7947904e2597" 00:07:28.651 ], 00:07:28.651 "product_name": "Raid Volume", 00:07:28.651 "block_size": 512, 00:07:28.651 "num_blocks": 126976, 00:07:28.651 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:28.651 "assigned_rate_limits": { 00:07:28.651 "rw_ios_per_sec": 0, 00:07:28.651 "rw_mbytes_per_sec": 0, 00:07:28.651 "r_mbytes_per_sec": 0, 00:07:28.651 "w_mbytes_per_sec": 0 00:07:28.651 }, 00:07:28.651 "claimed": false, 00:07:28.651 "zoned": false, 00:07:28.651 "supported_io_types": { 00:07:28.651 "read": true, 00:07:28.651 "write": true, 00:07:28.651 "unmap": true, 00:07:28.651 "flush": true, 00:07:28.651 "reset": true, 00:07:28.651 "nvme_admin": false, 00:07:28.651 "nvme_io": 
false, 00:07:28.651 "nvme_io_md": false, 00:07:28.651 "write_zeroes": true, 00:07:28.651 "zcopy": false, 00:07:28.651 "get_zone_info": false, 00:07:28.651 "zone_management": false, 00:07:28.651 "zone_append": false, 00:07:28.651 "compare": false, 00:07:28.651 "compare_and_write": false, 00:07:28.651 "abort": false, 00:07:28.651 "seek_hole": false, 00:07:28.651 "seek_data": false, 00:07:28.651 "copy": false, 00:07:28.651 "nvme_iov_md": false 00:07:28.651 }, 00:07:28.651 "memory_domains": [ 00:07:28.651 { 00:07:28.651 "dma_device_id": "system", 00:07:28.651 "dma_device_type": 1 00:07:28.651 }, 00:07:28.651 { 00:07:28.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.651 "dma_device_type": 2 00:07:28.651 }, 00:07:28.651 { 00:07:28.651 "dma_device_id": "system", 00:07:28.651 "dma_device_type": 1 00:07:28.651 }, 00:07:28.651 { 00:07:28.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.651 "dma_device_type": 2 00:07:28.651 } 00:07:28.651 ], 00:07:28.651 "driver_specific": { 00:07:28.651 "raid": { 00:07:28.651 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:28.651 "strip_size_kb": 64, 00:07:28.651 "state": "online", 00:07:28.651 "raid_level": "concat", 00:07:28.651 "superblock": true, 00:07:28.651 "num_base_bdevs": 2, 00:07:28.651 "num_base_bdevs_discovered": 2, 00:07:28.651 "num_base_bdevs_operational": 2, 00:07:28.651 "base_bdevs_list": [ 00:07:28.651 { 00:07:28.651 "name": "pt1", 00:07:28.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.651 "is_configured": true, 00:07:28.651 "data_offset": 2048, 00:07:28.651 "data_size": 63488 00:07:28.651 }, 00:07:28.651 { 00:07:28.651 "name": "pt2", 00:07:28.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:28.651 "is_configured": true, 00:07:28.651 "data_offset": 2048, 00:07:28.651 "data_size": 63488 00:07:28.651 } 00:07:28.651 ] 00:07:28.651 } 00:07:28.651 } 00:07:28.651 }' 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:28.651 pt2' 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:28.651 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:28.910 "name": "pt1", 00:07:28.910 "aliases": [ 00:07:28.910 "00000000-0000-0000-0000-000000000001" 00:07:28.910 ], 00:07:28.910 "product_name": "passthru", 00:07:28.910 "block_size": 512, 00:07:28.910 "num_blocks": 65536, 00:07:28.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:28.910 "assigned_rate_limits": { 00:07:28.910 "rw_ios_per_sec": 0, 00:07:28.910 "rw_mbytes_per_sec": 0, 00:07:28.910 "r_mbytes_per_sec": 0, 00:07:28.910 "w_mbytes_per_sec": 0 00:07:28.910 }, 00:07:28.910 "claimed": true, 00:07:28.910 "claim_type": "exclusive_write", 00:07:28.910 "zoned": false, 00:07:28.910 "supported_io_types": { 00:07:28.910 "read": true, 00:07:28.910 "write": true, 00:07:28.910 "unmap": true, 00:07:28.910 "flush": true, 00:07:28.910 "reset": true, 00:07:28.910 "nvme_admin": false, 00:07:28.910 "nvme_io": false, 00:07:28.910 "nvme_io_md": false, 00:07:28.910 "write_zeroes": true, 
00:07:28.910 "zcopy": true, 00:07:28.910 "get_zone_info": false, 00:07:28.910 "zone_management": false, 00:07:28.910 "zone_append": false, 00:07:28.910 "compare": false, 00:07:28.910 "compare_and_write": false, 00:07:28.910 "abort": true, 00:07:28.910 "seek_hole": false, 00:07:28.910 "seek_data": false, 00:07:28.910 "copy": true, 00:07:28.910 "nvme_iov_md": false 00:07:28.910 }, 00:07:28.910 "memory_domains": [ 00:07:28.910 { 00:07:28.910 "dma_device_id": "system", 00:07:28.910 "dma_device_type": 1 00:07:28.910 }, 00:07:28.910 { 00:07:28.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.910 "dma_device_type": 2 00:07:28.910 } 00:07:28.910 ], 00:07:28.910 "driver_specific": { 00:07:28.910 "passthru": { 00:07:28.910 "name": "pt1", 00:07:28.910 "base_bdev_name": "malloc1" 00:07:28.910 } 00:07:28.910 } 00:07:28.910 }' 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:28.910 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:29.169 "name": "pt2", 00:07:29.169 "aliases": [ 00:07:29.169 "00000000-0000-0000-0000-000000000002" 00:07:29.169 ], 00:07:29.169 "product_name": "passthru", 00:07:29.169 "block_size": 512, 00:07:29.169 "num_blocks": 65536, 00:07:29.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:29.169 "assigned_rate_limits": { 00:07:29.169 "rw_ios_per_sec": 0, 00:07:29.169 "rw_mbytes_per_sec": 0, 00:07:29.169 "r_mbytes_per_sec": 0, 00:07:29.169 "w_mbytes_per_sec": 0 00:07:29.169 }, 00:07:29.169 "claimed": true, 00:07:29.169 "claim_type": "exclusive_write", 00:07:29.169 "zoned": false, 00:07:29.169 "supported_io_types": { 00:07:29.169 "read": true, 00:07:29.169 "write": true, 00:07:29.169 "unmap": true, 00:07:29.169 "flush": true, 00:07:29.169 "reset": true, 00:07:29.169 "nvme_admin": false, 00:07:29.169 "nvme_io": false, 00:07:29.169 "nvme_io_md": false, 00:07:29.169 "write_zeroes": true, 00:07:29.169 "zcopy": true, 00:07:29.169 "get_zone_info": false, 00:07:29.169 "zone_management": false, 00:07:29.169 "zone_append": false, 00:07:29.169 
"compare": false, 00:07:29.169 "compare_and_write": false, 00:07:29.169 "abort": true, 00:07:29.169 "seek_hole": false, 00:07:29.169 "seek_data": false, 00:07:29.169 "copy": true, 00:07:29.169 "nvme_iov_md": false 00:07:29.169 }, 00:07:29.169 "memory_domains": [ 00:07:29.169 { 00:07:29.169 "dma_device_id": "system", 00:07:29.169 "dma_device_type": 1 00:07:29.169 }, 00:07:29.169 { 00:07:29.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.169 "dma_device_type": 2 00:07:29.169 } 00:07:29.169 ], 00:07:29.169 "driver_specific": { 00:07:29.169 "passthru": { 00:07:29.169 "name": "pt2", 00:07:29.169 "base_bdev_name": "malloc2" 00:07:29.169 } 00:07:29.169 } 00:07:29.169 }' 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:29.169 02:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:29.169 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:29.169 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:29.169 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:29.169 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:29.169 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:29.169 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:29.427 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:29.427 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:29.427 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:29.427 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:29.427 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:29.427 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:29.427 [2024-07-25 02:32:16.251050] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.428 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1ab0a660-4a2e-11ef-9c8e-7947904e2597 00:07:29.428 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 1ab0a660-4a2e-11ef-9c8e-7947904e2597 ']' 00:07:29.428 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:29.687 [2024-07-25 02:32:16.439033] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.687 [2024-07-25 02:32:16.439046] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.687 [2024-07-25 02:32:16.439059] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.687 [2024-07-25 02:32:16.439083] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.687 [2024-07-25 02:32:16.439087] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x308cf9234f00 name raid_bdev1, state offline 00:07:29.687 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:29.687 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:29.945 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:29.945 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:29.945 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:29.945 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:30.202 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:30.202 02:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:30.202 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:30.202 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:30.461 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:30.720 [2024-07-25 02:32:17.355056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:30.720 [2024-07-25 02:32:17.355488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:30.720 [2024-07-25 02:32:17.355509] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:30.720 [2024-07-25 02:32:17.355535] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:30.720 [2024-07-25 02:32:17.355542] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.720 [2024-07-25 02:32:17.355545] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x308cf9234c80 name raid_bdev1, state configuring 00:07:30.720 request: 00:07:30.720 { 00:07:30.720 "name": "raid_bdev1", 00:07:30.720 "raid_level": "concat", 00:07:30.720 "base_bdevs": [ 00:07:30.720 "malloc1", 00:07:30.720 "malloc2" 00:07:30.720 ], 00:07:30.720 "strip_size_kb": 64, 00:07:30.720 "superblock": false, 00:07:30.720 "method": "bdev_raid_create", 00:07:30.720 "req_id": 1 00:07:30.720 } 00:07:30.720 Got JSON-RPC error response 00:07:30.720 response: 00:07:30.720 { 00:07:30.720 "code": -17, 00:07:30.720 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:30.720 } 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:30.720 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.979 [2024-07-25 02:32:17.707060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.979 [2024-07-25 02:32:17.707092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.979 [2024-07-25 02:32:17.707100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x308cf9234780 00:07:30.979 [2024-07-25 02:32:17.707105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.979 [2024-07-25 02:32:17.707597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.979 [2024-07-25 02:32:17.707623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.979 [2024-07-25 02:32:17.707639] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:30.979 [2024-07-25 02:32:17.707649] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.979 pt1 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.979 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.237 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.237 "name": "raid_bdev1", 00:07:31.237 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:31.237 "strip_size_kb": 64, 00:07:31.237 "state": "configuring", 00:07:31.237 "raid_level": "concat", 00:07:31.237 "superblock": true, 00:07:31.237 "num_base_bdevs": 2, 00:07:31.237 "num_base_bdevs_discovered": 1, 00:07:31.237 "num_base_bdevs_operational": 2, 00:07:31.237 "base_bdevs_list": [ 00:07:31.237 { 00:07:31.237 "name": "pt1", 00:07:31.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.237 "is_configured": true, 00:07:31.237 "data_offset": 2048, 00:07:31.237 "data_size": 63488 00:07:31.237 }, 00:07:31.237 { 00:07:31.237 "name": null, 00:07:31.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.237 "is_configured": false, 00:07:31.237 "data_offset": 2048, 00:07:31.237 "data_size": 63488 00:07:31.237 } 00:07:31.237 ] 00:07:31.237 }' 00:07:31.237 02:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.237 02:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.496 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:31.496 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:31.496 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:31.496 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.496 [2024-07-25 02:32:18.363082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.496 [2024-07-25 02:32:18.363118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.496 [2024-07-25 02:32:18.363126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x308cf9234f00 00:07:31.496 [2024-07-25 02:32:18.363131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.496 [2024-07-25 02:32:18.363219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.496 [2024-07-25 02:32:18.363225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.496 [2024-07-25 02:32:18.363239] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
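The trace above walks raid_superblock_test end to end: two malloc bdevs are wrapped in passthru bdevs (pt1/pt2) with fixed UUIDs, a concat raid with a 64 KB strip and an on-disk superblock is assembled from them, its state is verified over JSON-RPC, the raid and passthru bdevs are deleted, and re-creating the raid directly on malloc1/malloc2 is expected to fail with -17 because the old superblocks are still found on the base bdevs. A minimal sketch of that RPC sequence, assuming a bdev_svc target is already listening on /var/tmp/spdk-raid.sock (every RPC name and flag below is taken verbatim from the trace; the shell variable and ordering comments are added for illustration only):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # base devices: 32 MB malloc bdevs with 512-byte blocks, each wrapped in a passthru bdev with a fixed UUID
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # concat raid, 64 KB strip size; -s writes a superblock onto the base bdevs
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # tear down; the superblock remains on malloc1/malloc2, which is why a later
  # bdev_raid_create -b 'malloc1 malloc2' returns -17 "File exists"
  $RPC bdev_raid_delete raid_bdev1
  $RPC bdev_passthru_delete pt1
  $RPC bdev_passthru_delete pt2

Re-creating pt1 afterwards triggers the examine path ("raid superblock found on bdev pt1"), which claims it back into raid_bdev1 in the "configuring" state checked below.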
00:07:31.496 [2024-07-25 02:32:18.363245] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.496 [2024-07-25 02:32:18.363262] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x308cf9235180 00:07:31.496 [2024-07-25 02:32:18.363265] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.496 [2024-07-25 02:32:18.363279] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x308cf9297e20 00:07:31.496 [2024-07-25 02:32:18.363312] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x308cf9235180 00:07:31.496 [2024-07-25 02:32:18.363315] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x308cf9235180 00:07:31.496 [2024-07-25 02:32:18.363331] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.496 pt2 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.755 "name": "raid_bdev1", 00:07:31.755 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:31.755 "strip_size_kb": 64, 00:07:31.755 "state": "online", 00:07:31.755 "raid_level": "concat", 00:07:31.755 "superblock": true, 00:07:31.755 "num_base_bdevs": 2, 00:07:31.755 "num_base_bdevs_discovered": 2, 00:07:31.755 "num_base_bdevs_operational": 2, 00:07:31.755 "base_bdevs_list": [ 00:07:31.755 { 00:07:31.755 "name": "pt1", 00:07:31.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.755 "is_configured": true, 00:07:31.755 "data_offset": 2048, 00:07:31.755 "data_size": 63488 00:07:31.755 }, 00:07:31.755 { 00:07:31.755 "name": "pt2", 00:07:31.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.755 "is_configured": true, 00:07:31.755 "data_offset": 2048, 00:07:31.755 "data_size": 63488 00:07:31.755 } 00:07:31.755 ] 00:07:31.755 }' 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.755 02:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:32.014 02:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:32.272 [2024-07-25 02:32:19.027126] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.272 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:32.272 "name": "raid_bdev1", 00:07:32.272 "aliases": [ 00:07:32.272 "1ab0a660-4a2e-11ef-9c8e-7947904e2597" 00:07:32.272 ], 00:07:32.272 "product_name": "Raid Volume", 00:07:32.272 "block_size": 512, 00:07:32.272 "num_blocks": 126976, 00:07:32.272 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:32.272 "assigned_rate_limits": { 00:07:32.272 "rw_ios_per_sec": 0, 00:07:32.272 "rw_mbytes_per_sec": 0, 00:07:32.272 "r_mbytes_per_sec": 0, 00:07:32.272 "w_mbytes_per_sec": 0 00:07:32.272 }, 00:07:32.272 "claimed": false, 00:07:32.272 "zoned": false, 00:07:32.272 "supported_io_types": { 00:07:32.272 "read": true, 00:07:32.272 "write": true, 00:07:32.272 "unmap": true, 00:07:32.272 "flush": true, 00:07:32.272 "reset": true, 00:07:32.272 "nvme_admin": false, 00:07:32.272 "nvme_io": false, 00:07:32.272 "nvme_io_md": false, 00:07:32.272 "write_zeroes": true, 00:07:32.272 "zcopy": false, 00:07:32.272 "get_zone_info": false, 00:07:32.272 "zone_management": false, 00:07:32.272 "zone_append": false, 00:07:32.272 "compare": false, 00:07:32.272 "compare_and_write": false, 00:07:32.272 "abort": false, 00:07:32.272 "seek_hole": false, 00:07:32.272 "seek_data": false, 00:07:32.272 "copy": false, 00:07:32.272 "nvme_iov_md": false 00:07:32.272 }, 00:07:32.272 "memory_domains": [ 00:07:32.272 { 00:07:32.272 "dma_device_id": "system", 00:07:32.272 "dma_device_type": 1 00:07:32.272 }, 00:07:32.272 { 00:07:32.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.272 "dma_device_type": 2 00:07:32.272 }, 00:07:32.272 { 00:07:32.272 "dma_device_id": "system", 00:07:32.272 "dma_device_type": 1 00:07:32.272 }, 00:07:32.272 { 00:07:32.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.272 "dma_device_type": 2 00:07:32.272 } 00:07:32.272 ], 00:07:32.272 "driver_specific": { 00:07:32.272 "raid": { 00:07:32.272 "uuid": "1ab0a660-4a2e-11ef-9c8e-7947904e2597", 00:07:32.272 "strip_size_kb": 64, 00:07:32.272 "state": "online", 00:07:32.273 "raid_level": "concat", 00:07:32.273 "superblock": true, 00:07:32.273 "num_base_bdevs": 2, 00:07:32.273 "num_base_bdevs_discovered": 2, 00:07:32.273 "num_base_bdevs_operational": 2, 00:07:32.273 "base_bdevs_list": [ 00:07:32.273 { 00:07:32.273 "name": "pt1", 00:07:32.273 "uuid": "00000000-0000-0000-0000-000000000001", 
00:07:32.273 "is_configured": true, 00:07:32.273 "data_offset": 2048, 00:07:32.273 "data_size": 63488 00:07:32.273 }, 00:07:32.273 { 00:07:32.273 "name": "pt2", 00:07:32.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.273 "is_configured": true, 00:07:32.273 "data_offset": 2048, 00:07:32.273 "data_size": 63488 00:07:32.273 } 00:07:32.273 ] 00:07:32.273 } 00:07:32.273 } 00:07:32.273 }' 00:07:32.273 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.273 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:32.273 pt2' 00:07:32.273 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:32.273 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:32.273 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:32.531 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:32.531 "name": "pt1", 00:07:32.531 "aliases": [ 00:07:32.531 "00000000-0000-0000-0000-000000000001" 00:07:32.531 ], 00:07:32.531 "product_name": "passthru", 00:07:32.531 "block_size": 512, 00:07:32.531 "num_blocks": 65536, 00:07:32.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.531 "assigned_rate_limits": { 00:07:32.531 "rw_ios_per_sec": 0, 00:07:32.531 "rw_mbytes_per_sec": 0, 00:07:32.531 "r_mbytes_per_sec": 0, 00:07:32.531 "w_mbytes_per_sec": 0 00:07:32.531 }, 00:07:32.531 "claimed": true, 00:07:32.531 "claim_type": "exclusive_write", 00:07:32.531 "zoned": false, 00:07:32.531 "supported_io_types": { 00:07:32.531 "read": true, 00:07:32.531 "write": true, 00:07:32.531 "unmap": true, 00:07:32.531 "flush": true, 00:07:32.531 "reset": true, 00:07:32.531 "nvme_admin": false, 00:07:32.531 "nvme_io": false, 00:07:32.531 "nvme_io_md": false, 00:07:32.531 "write_zeroes": true, 00:07:32.531 "zcopy": true, 00:07:32.531 "get_zone_info": false, 00:07:32.531 "zone_management": false, 00:07:32.531 "zone_append": false, 00:07:32.531 "compare": false, 00:07:32.531 "compare_and_write": false, 00:07:32.531 "abort": true, 00:07:32.531 "seek_hole": false, 00:07:32.531 "seek_data": false, 00:07:32.531 "copy": true, 00:07:32.531 "nvme_iov_md": false 00:07:32.531 }, 00:07:32.531 "memory_domains": [ 00:07:32.531 { 00:07:32.531 "dma_device_id": "system", 00:07:32.531 "dma_device_type": 1 00:07:32.531 }, 00:07:32.531 { 00:07:32.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.531 "dma_device_type": 2 00:07:32.531 } 00:07:32.531 ], 00:07:32.531 "driver_specific": { 00:07:32.531 "passthru": { 00:07:32.531 "name": "pt1", 00:07:32.531 "base_bdev_name": "malloc1" 00:07:32.531 } 00:07:32.531 } 00:07:32.531 }' 00:07:32.531 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.531 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.531 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:32.531 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.531 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:32.532 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:32.791 "name": "pt2", 00:07:32.791 "aliases": [ 00:07:32.791 "00000000-0000-0000-0000-000000000002" 00:07:32.791 ], 00:07:32.791 "product_name": "passthru", 00:07:32.791 "block_size": 512, 00:07:32.791 "num_blocks": 65536, 00:07:32.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.791 "assigned_rate_limits": { 00:07:32.791 "rw_ios_per_sec": 0, 00:07:32.791 "rw_mbytes_per_sec": 0, 00:07:32.791 "r_mbytes_per_sec": 0, 00:07:32.791 "w_mbytes_per_sec": 0 00:07:32.791 }, 00:07:32.791 "claimed": true, 00:07:32.791 "claim_type": "exclusive_write", 00:07:32.791 "zoned": false, 00:07:32.791 "supported_io_types": { 00:07:32.791 "read": true, 00:07:32.791 "write": true, 00:07:32.791 "unmap": true, 00:07:32.791 "flush": true, 00:07:32.791 "reset": true, 00:07:32.791 "nvme_admin": false, 00:07:32.791 "nvme_io": false, 00:07:32.791 "nvme_io_md": false, 00:07:32.791 "write_zeroes": true, 00:07:32.791 "zcopy": true, 00:07:32.791 "get_zone_info": false, 00:07:32.791 "zone_management": false, 00:07:32.791 "zone_append": false, 00:07:32.791 "compare": false, 00:07:32.791 "compare_and_write": false, 00:07:32.791 "abort": true, 00:07:32.791 "seek_hole": false, 00:07:32.791 "seek_data": false, 00:07:32.791 "copy": true, 00:07:32.791 "nvme_iov_md": false 00:07:32.791 }, 00:07:32.791 "memory_domains": [ 00:07:32.791 { 00:07:32.791 "dma_device_id": "system", 00:07:32.791 "dma_device_type": 1 00:07:32.791 }, 00:07:32.791 { 00:07:32.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.791 "dma_device_type": 2 00:07:32.791 } 00:07:32.791 ], 00:07:32.791 "driver_specific": { 00:07:32.791 "passthru": { 00:07:32.791 "name": "pt2", 00:07:32.791 "base_bdev_name": "malloc2" 00:07:32.791 } 00:07:32.791 } 00:07:32.791 }' 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:32.791 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:33.051 [2024-07-25 02:32:19.779127] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 1ab0a660-4a2e-11ef-9c8e-7947904e2597 '!=' 1ab0a660-4a2e-11ef-9c8e-7947904e2597 ']' 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50215 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50215 ']' 00:07:33.051 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50215 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50215 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:33.052 killing process with pid 50215 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50215' 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50215 00:07:33.052 [2024-07-25 02:32:19.809672] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.052 [2024-07-25 02:32:19.809688] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.052 [2024-07-25 02:32:19.809710] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.052 [2024-07-25 02:32:19.809714] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x308cf9235180 name raid_bdev1, state offline 00:07:33.052 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50215 00:07:33.052 [2024-07-25 02:32:19.819043] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.311 02:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:33.311 00:07:33.311 real 0m6.927s 00:07:33.311 user 0m11.909s 00:07:33.311 sys 0m1.285s 00:07:33.311 02:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.311 02:32:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.311 ************************************ 00:07:33.311 END TEST raid_superblock_test 00:07:33.311 ************************************ 00:07:33.311 02:32:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:33.311 02:32:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:33.311 02:32:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:33.311 02:32:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.311 02:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.311 ************************************ 00:07:33.311 START TEST raid_read_error_test 00:07:33.311 ************************************ 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XoX9DsDygT 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50476 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50476 
/var/tmp/spdk-raid.sock 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50476 ']' 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:33.311 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:33.312 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:33.312 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.312 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.312 [2024-07-25 02:32:20.060984] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:33.312 [2024-07-25 02:32:20.061280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:33.877 EAL: TSC is not safe to use in SMP mode 00:07:33.877 EAL: TSC is not invariant 00:07:33.877 [2024-07-25 02:32:20.477955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.877 [2024-07-25 02:32:20.569453] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:33.877 [2024-07-25 02:32:20.571200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.877 [2024-07-25 02:32:20.571833] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.877 [2024-07-25 02:32:20.571844] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.136 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.136 02:32:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:34.136 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:34.136 02:32:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:34.394 BaseBdev1_malloc 00:07:34.394 02:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:34.653 true 00:07:34.653 02:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:34.653 [2024-07-25 02:32:21.506953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:34.653 [2024-07-25 02:32:21.507002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.653 [2024-07-25 02:32:21.507040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x284456e34780 00:07:34.653 [2024-07-25 02:32:21.507046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:34.653 [2024-07-25 02:32:21.507500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.653 [2024-07-25 02:32:21.507528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:34.653 BaseBdev1 00:07:34.653 02:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:34.653 02:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:34.912 BaseBdev2_malloc 00:07:34.912 02:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:35.170 true 00:07:35.170 02:32:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.430 [2024-07-25 02:32:22.051271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.430 [2024-07-25 02:32:22.051309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.430 [2024-07-25 02:32:22.051329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x284456e34c80 00:07:35.430 [2024-07-25 02:32:22.051335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.430 [2024-07-25 02:32:22.051780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.430 [2024-07-25 02:32:22.051807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.430 BaseBdev2 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:35.430 [2024-07-25 02:32:22.239389] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.430 [2024-07-25 02:32:22.239769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.430 [2024-07-25 02:32:22.239837] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x284456e34f00 00:07:35.430 [2024-07-25 02:32:22.239842] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.430 [2024-07-25 02:32:22.239870] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x284456ea0e20 00:07:35.430 [2024-07-25 02:32:22.239920] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x284456e34f00 00:07:35.430 [2024-07-25 02:32:22.239923] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x284456e34f00 00:07:35.430 [2024-07-25 02:32:22.239940] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:35.430 02:32:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.430 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.697 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:35.697 "name": "raid_bdev1", 00:07:35.697 "uuid": "1f1805ed-4a2e-11ef-9c8e-7947904e2597", 00:07:35.697 "strip_size_kb": 64, 00:07:35.697 "state": "online", 00:07:35.697 "raid_level": "concat", 00:07:35.697 "superblock": true, 00:07:35.697 "num_base_bdevs": 2, 00:07:35.697 "num_base_bdevs_discovered": 2, 00:07:35.697 "num_base_bdevs_operational": 2, 00:07:35.697 "base_bdevs_list": [ 00:07:35.697 { 00:07:35.697 "name": "BaseBdev1", 00:07:35.697 "uuid": "a31f3bb8-b09e-0754-8113-bf3a489739b0", 00:07:35.697 "is_configured": true, 00:07:35.697 "data_offset": 2048, 00:07:35.697 "data_size": 63488 00:07:35.697 }, 00:07:35.697 { 00:07:35.697 "name": "BaseBdev2", 00:07:35.697 "uuid": "934a0099-67c7-ac56-88f2-3f9e06d5519a", 00:07:35.697 "is_configured": true, 00:07:35.697 "data_offset": 2048, 00:07:35.698 "data_size": 63488 00:07:35.698 } 00:07:35.698 ] 00:07:35.698 }' 00:07:35.698 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:35.698 02:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.957 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:35.957 02:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:35.957 [2024-07-25 02:32:22.807773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x284456ea0ec0 00:07:36.894 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:37.153 02:32:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.153 02:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.411 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:37.411 "name": "raid_bdev1", 00:07:37.411 "uuid": "1f1805ed-4a2e-11ef-9c8e-7947904e2597", 00:07:37.411 "strip_size_kb": 64, 00:07:37.411 "state": "online", 00:07:37.411 "raid_level": "concat", 00:07:37.411 "superblock": true, 00:07:37.411 "num_base_bdevs": 2, 00:07:37.411 "num_base_bdevs_discovered": 2, 00:07:37.411 "num_base_bdevs_operational": 2, 00:07:37.411 "base_bdevs_list": [ 00:07:37.411 { 00:07:37.411 "name": "BaseBdev1", 00:07:37.411 "uuid": "a31f3bb8-b09e-0754-8113-bf3a489739b0", 00:07:37.411 "is_configured": true, 00:07:37.411 "data_offset": 2048, 00:07:37.411 "data_size": 63488 00:07:37.411 }, 00:07:37.411 { 00:07:37.411 "name": "BaseBdev2", 00:07:37.411 "uuid": "934a0099-67c7-ac56-88f2-3f9e06d5519a", 00:07:37.411 "is_configured": true, 00:07:37.411 "data_offset": 2048, 00:07:37.411 "data_size": 63488 00:07:37.411 } 00:07:37.411 ] 00:07:37.411 }' 00:07:37.411 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:37.411 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.681 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:37.972 [2024-07-25 02:32:24.596816] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.972 [2024-07-25 02:32:24.596843] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.972 [2024-07-25 02:32:24.597103] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.972 [2024-07-25 02:32:24.597110] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.972 [2024-07-25 02:32:24.597115] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.972 [2024-07-25 02:32:24.597118] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x284456e34f00 name raid_bdev1, state offline 00:07:37.972 0 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50476 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50476 ']' 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50476 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50476 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:37.972 killing process with pid 50476 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50476' 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50476 00:07:37.972 [2024-07-25 02:32:24.627576] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50476 00:07:37.972 [2024-07-25 02:32:24.636722] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XoX9DsDygT 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:07:37.972 00:07:37.972 real 0m4.774s 00:07:37.972 user 0m6.958s 00:07:37.972 sys 0m0.927s 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.972 02:32:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.972 ************************************ 00:07:37.972 END TEST raid_read_error_test 00:07:37.972 ************************************ 00:07:38.231 02:32:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:38.231 02:32:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:38.231 02:32:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:38.231 02:32:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.231 02:32:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.231 ************************************ 00:07:38.231 START TEST raid_write_error_test 00:07:38.231 ************************************ 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dHZx66BBNI 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50596 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50596 /var/tmp/spdk-raid.sock 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50596 ']' 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:38.231 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:38.232 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:38.232 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.232 02:32:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.232 [2024-07-25 02:32:24.889480] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:07:38.232 [2024-07-25 02:32:24.889832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:38.491 EAL: TSC is not safe to use in SMP mode 00:07:38.491 EAL: TSC is not invariant 00:07:38.491 [2024-07-25 02:32:25.309492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.748 [2024-07-25 02:32:25.401237] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:38.748 [2024-07-25 02:32:25.402961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.748 [2024-07-25 02:32:25.403624] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.748 [2024-07-25 02:32:25.403636] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.008 02:32:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.008 02:32:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:39.008 02:32:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:39.008 02:32:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.267 BaseBdev1_malloc 00:07:39.267 02:32:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:39.526 true 00:07:39.526 02:32:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.526 [2024-07-25 02:32:26.331079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.526 [2024-07-25 02:32:26.331165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.527 [2024-07-25 02:32:26.331200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30c4eec34780 00:07:39.527 [2024-07-25 02:32:26.331225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.527 [2024-07-25 02:32:26.332048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.527 [2024-07-25 02:32:26.332086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.527 BaseBdev1 00:07:39.527 02:32:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:39.527 02:32:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.786 BaseBdev2_malloc 00:07:39.786 02:32:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:40.046 true 00:07:40.046 02:32:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:40.046 [2024-07-25 02:32:26.879371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:40.046 [2024-07-25 02:32:26.879431] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.046 [2024-07-25 02:32:26.879462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30c4eec34c80 00:07:40.046 [2024-07-25 02:32:26.879469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.046 [2024-07-25 02:32:26.880169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.046 [2024-07-25 02:32:26.880202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:40.046 BaseBdev2 00:07:40.046 02:32:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:40.305 [2024-07-25 02:32:27.067479] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.305 [2024-07-25 02:32:27.067764] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.305 [2024-07-25 02:32:27.067846] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x30c4eec34f00 00:07:40.305 [2024-07-25 02:32:27.067853] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.305 [2024-07-25 02:32:27.067877] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30c4eeca0e20 00:07:40.305 [2024-07-25 02:32:27.067925] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30c4eec34f00 00:07:40.305 [2024-07-25 02:32:27.067929] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x30c4eec34f00 00:07:40.305 [2024-07-25 02:32:27.067947] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.305 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.564 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:40.564 "name": "raid_bdev1", 00:07:40.564 "uuid": "21f8bb35-4a2e-11ef-9c8e-7947904e2597", 00:07:40.564 "strip_size_kb": 64, 00:07:40.564 "state": "online", 00:07:40.564 
"raid_level": "concat", 00:07:40.564 "superblock": true, 00:07:40.564 "num_base_bdevs": 2, 00:07:40.564 "num_base_bdevs_discovered": 2, 00:07:40.564 "num_base_bdevs_operational": 2, 00:07:40.564 "base_bdevs_list": [ 00:07:40.564 { 00:07:40.564 "name": "BaseBdev1", 00:07:40.564 "uuid": "7d7b250f-e8f8-5754-be7a-e04bc2032805", 00:07:40.564 "is_configured": true, 00:07:40.564 "data_offset": 2048, 00:07:40.564 "data_size": 63488 00:07:40.564 }, 00:07:40.564 { 00:07:40.564 "name": "BaseBdev2", 00:07:40.564 "uuid": "a6a09a81-b444-b852-9bf3-b037468643c1", 00:07:40.564 "is_configured": true, 00:07:40.564 "data_offset": 2048, 00:07:40.564 "data_size": 63488 00:07:40.564 } 00:07:40.564 ] 00:07:40.564 }' 00:07:40.564 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:40.564 02:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.823 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:40.823 02:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:40.823 [2024-07-25 02:32:27.631888] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30c4eeca0ec0 00:07:41.763 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.022 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.281 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:42.281 "name": "raid_bdev1", 00:07:42.281 "uuid": "21f8bb35-4a2e-11ef-9c8e-7947904e2597", 00:07:42.281 "strip_size_kb": 64, 00:07:42.281 "state": "online", 00:07:42.281 
"raid_level": "concat", 00:07:42.281 "superblock": true, 00:07:42.281 "num_base_bdevs": 2, 00:07:42.281 "num_base_bdevs_discovered": 2, 00:07:42.281 "num_base_bdevs_operational": 2, 00:07:42.281 "base_bdevs_list": [ 00:07:42.281 { 00:07:42.281 "name": "BaseBdev1", 00:07:42.281 "uuid": "7d7b250f-e8f8-5754-be7a-e04bc2032805", 00:07:42.281 "is_configured": true, 00:07:42.281 "data_offset": 2048, 00:07:42.281 "data_size": 63488 00:07:42.281 }, 00:07:42.281 { 00:07:42.281 "name": "BaseBdev2", 00:07:42.281 "uuid": "a6a09a81-b444-b852-9bf3-b037468643c1", 00:07:42.281 "is_configured": true, 00:07:42.281 "data_offset": 2048, 00:07:42.281 "data_size": 63488 00:07:42.281 } 00:07:42.281 ] 00:07:42.281 }' 00:07:42.281 02:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:42.281 02:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.540 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:42.801 [2024-07-25 02:32:29.427169] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:42.801 [2024-07-25 02:32:29.427204] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.801 [2024-07-25 02:32:29.427513] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.801 [2024-07-25 02:32:29.427530] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.801 [2024-07-25 02:32:29.427537] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.801 [2024-07-25 02:32:29.427541] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30c4eec34f00 name raid_bdev1, state offline 00:07:42.801 0 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50596 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50596 ']' 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50596 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50596 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:42.801 killing process with pid 50596 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50596' 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50596 00:07:42.801 [2024-07-25 02:32:29.457317] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.801 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50596 00:07:42.801 [2024-07-25 02:32:29.473949] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 
00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dHZx66BBNI 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:07:43.060 00:07:43.060 real 0m4.884s 00:07:43.060 user 0m6.952s 00:07:43.060 sys 0m0.965s 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.060 02:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.060 ************************************ 00:07:43.060 END TEST raid_write_error_test 00:07:43.060 ************************************ 00:07:43.060 02:32:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:43.060 02:32:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:43.060 02:32:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.060 02:32:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:43.060 02:32:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.060 02:32:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.060 ************************************ 00:07:43.060 START TEST raid_state_function_test 00:07:43.060 ************************************ 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:43.060 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:43.061 02:32:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50718 00:07:43.061 Process raid pid: 50718 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50718' 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50718 /var/tmp/spdk-raid.sock 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50718 ']' 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.061 02:32:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.061 [2024-07-25 02:32:29.834766] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:43.061 [2024-07-25 02:32:29.835034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:44.001 EAL: TSC is not safe to use in SMP mode 00:07:44.001 EAL: TSC is not invariant 00:07:44.001 [2024-07-25 02:32:30.567472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.001 [2024-07-25 02:32:30.660113] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:44.001 [2024-07-25 02:32:30.661807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.001 [2024-07-25 02:32:30.662395] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.001 [2024-07-25 02:32:30.662406] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.001 02:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.001 02:32:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:44.001 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:44.261 [2024-07-25 02:32:30.897400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.261 [2024-07-25 02:32:30.897437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.261 [2024-07-25 02:32:30.897456] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.261 [2024-07-25 02:32:30.897462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:44.261 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:44.262 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:44.262 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:44.262 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.262 02:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.262 02:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:44.262 "name": "Existed_Raid", 00:07:44.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.262 "strip_size_kb": 0, 00:07:44.262 "state": "configuring", 00:07:44.262 "raid_level": "raid1", 00:07:44.262 "superblock": false, 00:07:44.262 "num_base_bdevs": 2, 00:07:44.262 "num_base_bdevs_discovered": 0, 00:07:44.262 "num_base_bdevs_operational": 2, 00:07:44.262 "base_bdevs_list": [ 00:07:44.262 { 00:07:44.262 "name": "BaseBdev1", 00:07:44.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.262 "is_configured": false, 00:07:44.262 "data_offset": 0, 00:07:44.262 "data_size": 0 00:07:44.262 }, 00:07:44.262 { 00:07:44.262 "name": "BaseBdev2", 00:07:44.262 
"uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.262 "is_configured": false, 00:07:44.262 "data_offset": 0, 00:07:44.262 "data_size": 0 00:07:44.262 } 00:07:44.262 ] 00:07:44.262 }' 00:07:44.262 02:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:44.262 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.521 02:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:44.781 [2024-07-25 02:32:31.529769] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.781 [2024-07-25 02:32:31.529781] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1396c234500 name Existed_Raid, state configuring 00:07:44.781 02:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:45.041 [2024-07-25 02:32:31.713886] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.041 [2024-07-25 02:32:31.713916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.041 [2024-07-25 02:32:31.713919] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.041 [2024-07-25 02:32:31.713924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.041 [2024-07-25 02:32:31.898760] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.041 BaseBdev1 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:45.041 02:32:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:45.301 02:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:45.560 [ 00:07:45.560 { 00:07:45.560 "name": "BaseBdev1", 00:07:45.560 "aliases": [ 00:07:45.560 "24d9cfba-4a2e-11ef-9c8e-7947904e2597" 00:07:45.560 ], 00:07:45.560 "product_name": "Malloc disk", 00:07:45.560 "block_size": 512, 00:07:45.560 "num_blocks": 65536, 00:07:45.560 "uuid": "24d9cfba-4a2e-11ef-9c8e-7947904e2597", 00:07:45.560 "assigned_rate_limits": { 00:07:45.560 "rw_ios_per_sec": 0, 00:07:45.560 "rw_mbytes_per_sec": 0, 00:07:45.560 "r_mbytes_per_sec": 0, 00:07:45.560 "w_mbytes_per_sec": 0 00:07:45.560 }, 00:07:45.560 
"claimed": true, 00:07:45.560 "claim_type": "exclusive_write", 00:07:45.560 "zoned": false, 00:07:45.560 "supported_io_types": { 00:07:45.560 "read": true, 00:07:45.560 "write": true, 00:07:45.560 "unmap": true, 00:07:45.560 "flush": true, 00:07:45.560 "reset": true, 00:07:45.560 "nvme_admin": false, 00:07:45.560 "nvme_io": false, 00:07:45.560 "nvme_io_md": false, 00:07:45.560 "write_zeroes": true, 00:07:45.560 "zcopy": true, 00:07:45.560 "get_zone_info": false, 00:07:45.560 "zone_management": false, 00:07:45.560 "zone_append": false, 00:07:45.560 "compare": false, 00:07:45.560 "compare_and_write": false, 00:07:45.560 "abort": true, 00:07:45.560 "seek_hole": false, 00:07:45.560 "seek_data": false, 00:07:45.560 "copy": true, 00:07:45.560 "nvme_iov_md": false 00:07:45.560 }, 00:07:45.560 "memory_domains": [ 00:07:45.560 { 00:07:45.560 "dma_device_id": "system", 00:07:45.560 "dma_device_type": 1 00:07:45.561 }, 00:07:45.561 { 00:07:45.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.561 "dma_device_type": 2 00:07:45.561 } 00:07:45.561 ], 00:07:45.561 "driver_specific": {} 00:07:45.561 } 00:07:45.561 ] 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:45.561 "name": "Existed_Raid", 00:07:45.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.561 "strip_size_kb": 0, 00:07:45.561 "state": "configuring", 00:07:45.561 "raid_level": "raid1", 00:07:45.561 "superblock": false, 00:07:45.561 "num_base_bdevs": 2, 00:07:45.561 "num_base_bdevs_discovered": 1, 00:07:45.561 "num_base_bdevs_operational": 2, 00:07:45.561 "base_bdevs_list": [ 00:07:45.561 { 00:07:45.561 "name": "BaseBdev1", 00:07:45.561 "uuid": "24d9cfba-4a2e-11ef-9c8e-7947904e2597", 00:07:45.561 "is_configured": true, 00:07:45.561 "data_offset": 0, 00:07:45.561 "data_size": 65536 00:07:45.561 }, 00:07:45.561 { 00:07:45.561 "name": "BaseBdev2", 00:07:45.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.561 
"is_configured": false, 00:07:45.561 "data_offset": 0, 00:07:45.561 "data_size": 0 00:07:45.561 } 00:07:45.561 ] 00:07:45.561 }' 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:45.561 02:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.820 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:46.080 [2024-07-25 02:32:32.854560] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.080 [2024-07-25 02:32:32.854578] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1396c234500 name Existed_Raid, state configuring 00:07:46.080 02:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:46.338 [2024-07-25 02:32:33.034673] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.338 [2024-07-25 02:32:33.035247] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.338 [2024-07-25 02:32:33.035283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.338 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.597 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:46.597 "name": "Existed_Raid", 00:07:46.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.597 "strip_size_kb": 0, 00:07:46.597 "state": "configuring", 00:07:46.597 "raid_level": "raid1", 00:07:46.597 "superblock": false, 00:07:46.597 "num_base_bdevs": 2, 00:07:46.597 "num_base_bdevs_discovered": 1, 00:07:46.597 "num_base_bdevs_operational": 
2, 00:07:46.597 "base_bdevs_list": [ 00:07:46.597 { 00:07:46.597 "name": "BaseBdev1", 00:07:46.597 "uuid": "24d9cfba-4a2e-11ef-9c8e-7947904e2597", 00:07:46.597 "is_configured": true, 00:07:46.597 "data_offset": 0, 00:07:46.597 "data_size": 65536 00:07:46.597 }, 00:07:46.597 { 00:07:46.597 "name": "BaseBdev2", 00:07:46.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.597 "is_configured": false, 00:07:46.597 "data_offset": 0, 00:07:46.597 "data_size": 0 00:07:46.597 } 00:07:46.597 ] 00:07:46.597 }' 00:07:46.597 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:46.597 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.857 [2024-07-25 02:32:33.671138] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.857 [2024-07-25 02:32:33.671152] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1396c234a00 00:07:46.857 [2024-07-25 02:32:33.671155] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:46.857 [2024-07-25 02:32:33.671170] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1396c297e20 00:07:46.857 [2024-07-25 02:32:33.671234] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1396c234a00 00:07:46.857 [2024-07-25 02:32:33.671237] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1396c234a00 00:07:46.857 [2024-07-25 02:32:33.671261] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.857 BaseBdev2 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:46.857 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:47.120 02:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.383 [ 00:07:47.383 { 00:07:47.383 "name": "BaseBdev2", 00:07:47.383 "aliases": [ 00:07:47.383 "25e85b71-4a2e-11ef-9c8e-7947904e2597" 00:07:47.383 ], 00:07:47.383 "product_name": "Malloc disk", 00:07:47.383 "block_size": 512, 00:07:47.383 "num_blocks": 65536, 00:07:47.383 "uuid": "25e85b71-4a2e-11ef-9c8e-7947904e2597", 00:07:47.383 "assigned_rate_limits": { 00:07:47.383 "rw_ios_per_sec": 0, 00:07:47.383 "rw_mbytes_per_sec": 0, 00:07:47.383 "r_mbytes_per_sec": 0, 00:07:47.383 "w_mbytes_per_sec": 0 00:07:47.383 }, 00:07:47.383 "claimed": true, 00:07:47.383 "claim_type": "exclusive_write", 00:07:47.383 "zoned": false, 00:07:47.383 
"supported_io_types": { 00:07:47.383 "read": true, 00:07:47.383 "write": true, 00:07:47.383 "unmap": true, 00:07:47.383 "flush": true, 00:07:47.383 "reset": true, 00:07:47.383 "nvme_admin": false, 00:07:47.383 "nvme_io": false, 00:07:47.383 "nvme_io_md": false, 00:07:47.383 "write_zeroes": true, 00:07:47.383 "zcopy": true, 00:07:47.383 "get_zone_info": false, 00:07:47.383 "zone_management": false, 00:07:47.383 "zone_append": false, 00:07:47.383 "compare": false, 00:07:47.383 "compare_and_write": false, 00:07:47.383 "abort": true, 00:07:47.383 "seek_hole": false, 00:07:47.383 "seek_data": false, 00:07:47.383 "copy": true, 00:07:47.383 "nvme_iov_md": false 00:07:47.383 }, 00:07:47.383 "memory_domains": [ 00:07:47.383 { 00:07:47.383 "dma_device_id": "system", 00:07:47.383 "dma_device_type": 1 00:07:47.383 }, 00:07:47.383 { 00:07:47.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.383 "dma_device_type": 2 00:07:47.383 } 00:07:47.383 ], 00:07:47.383 "driver_specific": {} 00:07:47.383 } 00:07:47.383 ] 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.383 "name": "Existed_Raid", 00:07:47.383 "uuid": "25e85fa2-4a2e-11ef-9c8e-7947904e2597", 00:07:47.383 "strip_size_kb": 0, 00:07:47.383 "state": "online", 00:07:47.383 "raid_level": "raid1", 00:07:47.383 "superblock": false, 00:07:47.383 "num_base_bdevs": 2, 00:07:47.383 "num_base_bdevs_discovered": 2, 00:07:47.383 "num_base_bdevs_operational": 2, 00:07:47.383 "base_bdevs_list": [ 00:07:47.383 { 00:07:47.383 "name": "BaseBdev1", 00:07:47.383 "uuid": "24d9cfba-4a2e-11ef-9c8e-7947904e2597", 00:07:47.383 "is_configured": true, 00:07:47.383 "data_offset": 0, 00:07:47.383 "data_size": 65536 00:07:47.383 }, 00:07:47.383 { 00:07:47.383 "name": 
"BaseBdev2", 00:07:47.383 "uuid": "25e85b71-4a2e-11ef-9c8e-7947904e2597", 00:07:47.383 "is_configured": true, 00:07:47.383 "data_offset": 0, 00:07:47.383 "data_size": 65536 00:07:47.383 } 00:07:47.383 ] 00:07:47.383 }' 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.383 02:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.643 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:47.644 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:47.903 [2024-07-25 02:32:34.683683] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.903 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:47.903 "name": "Existed_Raid", 00:07:47.903 "aliases": [ 00:07:47.903 "25e85fa2-4a2e-11ef-9c8e-7947904e2597" 00:07:47.903 ], 00:07:47.903 "product_name": "Raid Volume", 00:07:47.903 "block_size": 512, 00:07:47.903 "num_blocks": 65536, 00:07:47.903 "uuid": "25e85fa2-4a2e-11ef-9c8e-7947904e2597", 00:07:47.903 "assigned_rate_limits": { 00:07:47.903 "rw_ios_per_sec": 0, 00:07:47.903 "rw_mbytes_per_sec": 0, 00:07:47.903 "r_mbytes_per_sec": 0, 00:07:47.903 "w_mbytes_per_sec": 0 00:07:47.903 }, 00:07:47.903 "claimed": false, 00:07:47.903 "zoned": false, 00:07:47.903 "supported_io_types": { 00:07:47.903 "read": true, 00:07:47.903 "write": true, 00:07:47.903 "unmap": false, 00:07:47.903 "flush": false, 00:07:47.903 "reset": true, 00:07:47.903 "nvme_admin": false, 00:07:47.903 "nvme_io": false, 00:07:47.903 "nvme_io_md": false, 00:07:47.903 "write_zeroes": true, 00:07:47.903 "zcopy": false, 00:07:47.903 "get_zone_info": false, 00:07:47.903 "zone_management": false, 00:07:47.903 "zone_append": false, 00:07:47.903 "compare": false, 00:07:47.903 "compare_and_write": false, 00:07:47.903 "abort": false, 00:07:47.903 "seek_hole": false, 00:07:47.903 "seek_data": false, 00:07:47.903 "copy": false, 00:07:47.903 "nvme_iov_md": false 00:07:47.903 }, 00:07:47.903 "memory_domains": [ 00:07:47.903 { 00:07:47.903 "dma_device_id": "system", 00:07:47.903 "dma_device_type": 1 00:07:47.903 }, 00:07:47.903 { 00:07:47.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.903 "dma_device_type": 2 00:07:47.903 }, 00:07:47.903 { 00:07:47.903 "dma_device_id": "system", 00:07:47.903 "dma_device_type": 1 00:07:47.903 }, 00:07:47.903 { 00:07:47.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.903 "dma_device_type": 2 00:07:47.903 } 00:07:47.903 ], 00:07:47.903 "driver_specific": { 00:07:47.903 "raid": { 00:07:47.903 "uuid": "25e85fa2-4a2e-11ef-9c8e-7947904e2597", 00:07:47.903 "strip_size_kb": 0, 00:07:47.903 "state": "online", 00:07:47.903 
"raid_level": "raid1", 00:07:47.903 "superblock": false, 00:07:47.903 "num_base_bdevs": 2, 00:07:47.903 "num_base_bdevs_discovered": 2, 00:07:47.903 "num_base_bdevs_operational": 2, 00:07:47.903 "base_bdevs_list": [ 00:07:47.903 { 00:07:47.903 "name": "BaseBdev1", 00:07:47.903 "uuid": "24d9cfba-4a2e-11ef-9c8e-7947904e2597", 00:07:47.903 "is_configured": true, 00:07:47.903 "data_offset": 0, 00:07:47.903 "data_size": 65536 00:07:47.903 }, 00:07:47.903 { 00:07:47.903 "name": "BaseBdev2", 00:07:47.903 "uuid": "25e85b71-4a2e-11ef-9c8e-7947904e2597", 00:07:47.903 "is_configured": true, 00:07:47.903 "data_offset": 0, 00:07:47.903 "data_size": 65536 00:07:47.903 } 00:07:47.903 ] 00:07:47.903 } 00:07:47.903 } 00:07:47.903 }' 00:07:47.904 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.904 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:47.904 BaseBdev2' 00:07:47.904 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:47.904 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:47.904 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:48.163 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:48.163 "name": "BaseBdev1", 00:07:48.163 "aliases": [ 00:07:48.163 "24d9cfba-4a2e-11ef-9c8e-7947904e2597" 00:07:48.163 ], 00:07:48.163 "product_name": "Malloc disk", 00:07:48.163 "block_size": 512, 00:07:48.163 "num_blocks": 65536, 00:07:48.163 "uuid": "24d9cfba-4a2e-11ef-9c8e-7947904e2597", 00:07:48.163 "assigned_rate_limits": { 00:07:48.163 "rw_ios_per_sec": 0, 00:07:48.163 "rw_mbytes_per_sec": 0, 00:07:48.163 "r_mbytes_per_sec": 0, 00:07:48.163 "w_mbytes_per_sec": 0 00:07:48.163 }, 00:07:48.163 "claimed": true, 00:07:48.163 "claim_type": "exclusive_write", 00:07:48.163 "zoned": false, 00:07:48.163 "supported_io_types": { 00:07:48.163 "read": true, 00:07:48.163 "write": true, 00:07:48.163 "unmap": true, 00:07:48.163 "flush": true, 00:07:48.163 "reset": true, 00:07:48.163 "nvme_admin": false, 00:07:48.163 "nvme_io": false, 00:07:48.163 "nvme_io_md": false, 00:07:48.163 "write_zeroes": true, 00:07:48.163 "zcopy": true, 00:07:48.163 "get_zone_info": false, 00:07:48.163 "zone_management": false, 00:07:48.163 "zone_append": false, 00:07:48.163 "compare": false, 00:07:48.163 "compare_and_write": false, 00:07:48.163 "abort": true, 00:07:48.163 "seek_hole": false, 00:07:48.163 "seek_data": false, 00:07:48.163 "copy": true, 00:07:48.163 "nvme_iov_md": false 00:07:48.163 }, 00:07:48.164 "memory_domains": [ 00:07:48.164 { 00:07:48.164 "dma_device_id": "system", 00:07:48.164 "dma_device_type": 1 00:07:48.164 }, 00:07:48.164 { 00:07:48.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.164 "dma_device_type": 2 00:07:48.164 } 00:07:48.164 ], 00:07:48.164 "driver_specific": {} 00:07:48.164 }' 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:48.164 02:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:48.424 "name": "BaseBdev2", 00:07:48.424 "aliases": [ 00:07:48.424 "25e85b71-4a2e-11ef-9c8e-7947904e2597" 00:07:48.424 ], 00:07:48.424 "product_name": "Malloc disk", 00:07:48.424 "block_size": 512, 00:07:48.424 "num_blocks": 65536, 00:07:48.424 "uuid": "25e85b71-4a2e-11ef-9c8e-7947904e2597", 00:07:48.424 "assigned_rate_limits": { 00:07:48.424 "rw_ios_per_sec": 0, 00:07:48.424 "rw_mbytes_per_sec": 0, 00:07:48.424 "r_mbytes_per_sec": 0, 00:07:48.424 "w_mbytes_per_sec": 0 00:07:48.424 }, 00:07:48.424 "claimed": true, 00:07:48.424 "claim_type": "exclusive_write", 00:07:48.424 "zoned": false, 00:07:48.424 "supported_io_types": { 00:07:48.424 "read": true, 00:07:48.424 "write": true, 00:07:48.424 "unmap": true, 00:07:48.424 "flush": true, 00:07:48.424 "reset": true, 00:07:48.424 "nvme_admin": false, 00:07:48.424 "nvme_io": false, 00:07:48.424 "nvme_io_md": false, 00:07:48.424 "write_zeroes": true, 00:07:48.424 "zcopy": true, 00:07:48.424 "get_zone_info": false, 00:07:48.424 "zone_management": false, 00:07:48.424 "zone_append": false, 00:07:48.424 "compare": false, 00:07:48.424 "compare_and_write": false, 00:07:48.424 "abort": true, 00:07:48.424 "seek_hole": false, 00:07:48.424 "seek_data": false, 00:07:48.424 "copy": true, 00:07:48.424 "nvme_iov_md": false 00:07:48.424 }, 00:07:48.424 "memory_domains": [ 00:07:48.424 { 00:07:48.424 "dma_device_id": "system", 00:07:48.424 "dma_device_type": 1 00:07:48.424 }, 00:07:48.424 { 00:07:48.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.424 "dma_device_type": 2 00:07:48.424 } 00:07:48.424 ], 00:07:48.424 "driver_specific": {} 00:07:48.424 }' 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:48.424 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:48.684 [2024-07-25 02:32:35.436123] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.684 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.944 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:48.944 "name": "Existed_Raid", 00:07:48.944 "uuid": "25e85fa2-4a2e-11ef-9c8e-7947904e2597", 00:07:48.944 "strip_size_kb": 0, 00:07:48.944 "state": "online", 00:07:48.944 "raid_level": "raid1", 00:07:48.944 "superblock": false, 00:07:48.944 "num_base_bdevs": 2, 00:07:48.944 "num_base_bdevs_discovered": 1, 00:07:48.944 "num_base_bdevs_operational": 1, 00:07:48.944 "base_bdevs_list": [ 00:07:48.944 { 00:07:48.944 "name": null, 00:07:48.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.944 "is_configured": false, 
00:07:48.944 "data_offset": 0, 00:07:48.944 "data_size": 65536 00:07:48.944 }, 00:07:48.944 { 00:07:48.944 "name": "BaseBdev2", 00:07:48.944 "uuid": "25e85b71-4a2e-11ef-9c8e-7947904e2597", 00:07:48.944 "is_configured": true, 00:07:48.944 "data_offset": 0, 00:07:48.944 "data_size": 65536 00:07:48.944 } 00:07:48.944 ] 00:07:48.944 }' 00:07:48.944 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:48.944 02:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.203 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:49.203 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:49.203 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.203 02:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:49.462 [2024-07-25 02:32:36.261209] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:49.462 [2024-07-25 02:32:36.261231] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.462 [2024-07-25 02:32:36.265848] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.462 [2024-07-25 02:32:36.265859] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.462 [2024-07-25 02:32:36.265862] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1396c234a00 name Existed_Raid, state offline 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.462 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50718 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50718 ']' 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50718 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # ps -c -o command 50718 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:07:49.722 killing process with pid 50718 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:49.722 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:49.723 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50718' 00:07:49.723 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50718 00:07:49.723 [2024-07-25 02:32:36.474716] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.723 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50718 00:07:49.723 [2024-07-25 02:32:36.474750] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:49.982 00:07:49.982 real 0m6.834s 00:07:49.982 user 0m11.245s 00:07:49.982 sys 0m1.717s 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.982 ************************************ 00:07:49.982 END TEST raid_state_function_test 00:07:49.982 ************************************ 00:07:49.982 02:32:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:49.982 02:32:36 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:49.982 02:32:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:49.982 02:32:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.982 02:32:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.982 ************************************ 00:07:49.982 START TEST raid_state_function_test_sb 00:07:49.982 ************************************ 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=50981 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50981' 00:07:49.982 Process raid pid: 50981 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 50981 /var/tmp/spdk-raid.sock 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 50981 ']' 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.982 02:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.982 [2024-07-25 02:32:36.733322] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:49.982 [2024-07-25 02:32:36.733628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:50.550 EAL: TSC is not safe to use in SMP mode 00:07:50.550 EAL: TSC is not invariant 00:07:50.551 [2024-07-25 02:32:37.153341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.551 [2024-07-25 02:32:37.245169] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:50.551 [2024-07-25 02:32:37.247147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.551 [2024-07-25 02:32:37.247750] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.551 [2024-07-25 02:32:37.247761] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.810 02:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.810 02:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:50.810 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:51.070 [2024-07-25 02:32:37.806896] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.070 [2024-07-25 02:32:37.806930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.070 [2024-07-25 02:32:37.806950] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.070 [2024-07-25 02:32:37.806956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.070 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.329 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:51.329 "name": "Existed_Raid", 00:07:51.330 "uuid": "285f6fb3-4a2e-11ef-9c8e-7947904e2597", 00:07:51.330 "strip_size_kb": 0, 00:07:51.330 "state": "configuring", 00:07:51.330 "raid_level": "raid1", 00:07:51.330 "superblock": true, 00:07:51.330 "num_base_bdevs": 2, 00:07:51.330 "num_base_bdevs_discovered": 0, 00:07:51.330 "num_base_bdevs_operational": 2, 00:07:51.330 "base_bdevs_list": [ 00:07:51.330 { 00:07:51.330 "name": "BaseBdev1", 00:07:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.330 "is_configured": false, 00:07:51.330 "data_offset": 0, 00:07:51.330 "data_size": 0 00:07:51.330 }, 00:07:51.330 
{ 00:07:51.330 "name": "BaseBdev2", 00:07:51.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.330 "is_configured": false, 00:07:51.330 "data_offset": 0, 00:07:51.330 "data_size": 0 00:07:51.330 } 00:07:51.330 ] 00:07:51.330 }' 00:07:51.330 02:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:51.330 02:32:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.589 02:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:51.589 [2024-07-25 02:32:38.419233] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.589 [2024-07-25 02:32:38.419246] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34b4ef234500 name Existed_Raid, state configuring 00:07:51.589 02:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:51.849 [2024-07-25 02:32:38.603345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.849 [2024-07-25 02:32:38.603372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.849 [2024-07-25 02:32:38.603375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.849 [2024-07-25 02:32:38.603380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.849 02:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.109 [2024-07-25 02:32:38.788193] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.109 BaseBdev1 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:52.109 02:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.370 [ 00:07:52.370 { 00:07:52.370 "name": "BaseBdev1", 00:07:52.370 "aliases": [ 00:07:52.370 "28f50f89-4a2e-11ef-9c8e-7947904e2597" 00:07:52.370 ], 00:07:52.370 "product_name": "Malloc disk", 00:07:52.370 "block_size": 512, 00:07:52.370 "num_blocks": 65536, 00:07:52.370 "uuid": "28f50f89-4a2e-11ef-9c8e-7947904e2597", 00:07:52.370 "assigned_rate_limits": { 00:07:52.370 "rw_ios_per_sec": 0, 00:07:52.370 "rw_mbytes_per_sec": 0, 00:07:52.370 
"r_mbytes_per_sec": 0, 00:07:52.370 "w_mbytes_per_sec": 0 00:07:52.370 }, 00:07:52.370 "claimed": true, 00:07:52.370 "claim_type": "exclusive_write", 00:07:52.370 "zoned": false, 00:07:52.370 "supported_io_types": { 00:07:52.370 "read": true, 00:07:52.370 "write": true, 00:07:52.370 "unmap": true, 00:07:52.370 "flush": true, 00:07:52.370 "reset": true, 00:07:52.370 "nvme_admin": false, 00:07:52.370 "nvme_io": false, 00:07:52.370 "nvme_io_md": false, 00:07:52.370 "write_zeroes": true, 00:07:52.370 "zcopy": true, 00:07:52.370 "get_zone_info": false, 00:07:52.370 "zone_management": false, 00:07:52.370 "zone_append": false, 00:07:52.370 "compare": false, 00:07:52.370 "compare_and_write": false, 00:07:52.370 "abort": true, 00:07:52.370 "seek_hole": false, 00:07:52.370 "seek_data": false, 00:07:52.370 "copy": true, 00:07:52.370 "nvme_iov_md": false 00:07:52.370 }, 00:07:52.370 "memory_domains": [ 00:07:52.370 { 00:07:52.370 "dma_device_id": "system", 00:07:52.370 "dma_device_type": 1 00:07:52.370 }, 00:07:52.370 { 00:07:52.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.370 "dma_device_type": 2 00:07:52.370 } 00:07:52.370 ], 00:07:52.370 "driver_specific": {} 00:07:52.370 } 00:07:52.370 ] 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.370 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.629 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:52.629 "name": "Existed_Raid", 00:07:52.629 "uuid": "28d8f721-4a2e-11ef-9c8e-7947904e2597", 00:07:52.629 "strip_size_kb": 0, 00:07:52.629 "state": "configuring", 00:07:52.629 "raid_level": "raid1", 00:07:52.629 "superblock": true, 00:07:52.629 "num_base_bdevs": 2, 00:07:52.629 "num_base_bdevs_discovered": 1, 00:07:52.629 "num_base_bdevs_operational": 2, 00:07:52.629 "base_bdevs_list": [ 00:07:52.629 { 00:07:52.629 "name": "BaseBdev1", 00:07:52.629 "uuid": "28f50f89-4a2e-11ef-9c8e-7947904e2597", 00:07:52.629 "is_configured": true, 00:07:52.629 "data_offset": 2048, 00:07:52.629 "data_size": 63488 00:07:52.629 }, 
00:07:52.629 { 00:07:52.629 "name": "BaseBdev2", 00:07:52.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.629 "is_configured": false, 00:07:52.629 "data_offset": 0, 00:07:52.629 "data_size": 0 00:07:52.629 } 00:07:52.629 ] 00:07:52.629 }' 00:07:52.629 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:52.629 02:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.888 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:52.889 [2024-07-25 02:32:39.740022] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.889 [2024-07-25 02:32:39.740039] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34b4ef234500 name Existed_Raid, state configuring 00:07:52.889 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:53.148 [2024-07-25 02:32:39.920137] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.148 [2024-07-25 02:32:39.920720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.148 [2024-07-25 02:32:39.920751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.148 02:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.408 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:53.408 "name": "Existed_Raid", 00:07:53.408 "uuid": "29a1e43e-4a2e-11ef-9c8e-7947904e2597", 00:07:53.408 "strip_size_kb": 0, 00:07:53.408 "state": "configuring", 
00:07:53.408 "raid_level": "raid1", 00:07:53.408 "superblock": true, 00:07:53.408 "num_base_bdevs": 2, 00:07:53.408 "num_base_bdevs_discovered": 1, 00:07:53.408 "num_base_bdevs_operational": 2, 00:07:53.408 "base_bdevs_list": [ 00:07:53.408 { 00:07:53.408 "name": "BaseBdev1", 00:07:53.408 "uuid": "28f50f89-4a2e-11ef-9c8e-7947904e2597", 00:07:53.408 "is_configured": true, 00:07:53.408 "data_offset": 2048, 00:07:53.408 "data_size": 63488 00:07:53.408 }, 00:07:53.408 { 00:07:53.408 "name": "BaseBdev2", 00:07:53.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.408 "is_configured": false, 00:07:53.408 "data_offset": 0, 00:07:53.408 "data_size": 0 00:07:53.408 } 00:07:53.408 ] 00:07:53.408 }' 00:07:53.408 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:53.408 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.667 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:53.667 [2024-07-25 02:32:40.552608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.668 [2024-07-25 02:32:40.552650] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x34b4ef234a00 00:07:53.668 [2024-07-25 02:32:40.552654] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:53.668 [2024-07-25 02:32:40.552672] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34b4ef297e20 00:07:53.668 [2024-07-25 02:32:40.552702] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34b4ef234a00 00:07:53.668 [2024-07-25 02:32:40.552705] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x34b4ef234a00 00:07:53.668 [2024-07-25 02:32:40.552720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.928 BaseBdev2 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:53.928 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.188 [ 00:07:54.188 { 00:07:54.188 "name": "BaseBdev2", 00:07:54.188 "aliases": [ 00:07:54.188 "2a026285-4a2e-11ef-9c8e-7947904e2597" 00:07:54.188 ], 00:07:54.188 "product_name": "Malloc disk", 00:07:54.188 "block_size": 512, 00:07:54.188 "num_blocks": 65536, 00:07:54.188 "uuid": "2a026285-4a2e-11ef-9c8e-7947904e2597", 00:07:54.188 "assigned_rate_limits": { 00:07:54.188 "rw_ios_per_sec": 0, 00:07:54.188 
"rw_mbytes_per_sec": 0, 00:07:54.188 "r_mbytes_per_sec": 0, 00:07:54.188 "w_mbytes_per_sec": 0 00:07:54.188 }, 00:07:54.188 "claimed": true, 00:07:54.188 "claim_type": "exclusive_write", 00:07:54.188 "zoned": false, 00:07:54.188 "supported_io_types": { 00:07:54.188 "read": true, 00:07:54.188 "write": true, 00:07:54.188 "unmap": true, 00:07:54.188 "flush": true, 00:07:54.188 "reset": true, 00:07:54.188 "nvme_admin": false, 00:07:54.188 "nvme_io": false, 00:07:54.188 "nvme_io_md": false, 00:07:54.188 "write_zeroes": true, 00:07:54.188 "zcopy": true, 00:07:54.188 "get_zone_info": false, 00:07:54.188 "zone_management": false, 00:07:54.188 "zone_append": false, 00:07:54.188 "compare": false, 00:07:54.188 "compare_and_write": false, 00:07:54.188 "abort": true, 00:07:54.188 "seek_hole": false, 00:07:54.188 "seek_data": false, 00:07:54.188 "copy": true, 00:07:54.188 "nvme_iov_md": false 00:07:54.188 }, 00:07:54.188 "memory_domains": [ 00:07:54.188 { 00:07:54.188 "dma_device_id": "system", 00:07:54.188 "dma_device_type": 1 00:07:54.188 }, 00:07:54.188 { 00:07:54.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.188 "dma_device_type": 2 00:07:54.188 } 00:07:54.188 ], 00:07:54.188 "driver_specific": {} 00:07:54.188 } 00:07:54.188 ] 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.188 02:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.448 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:54.448 "name": "Existed_Raid", 00:07:54.448 "uuid": "29a1e43e-4a2e-11ef-9c8e-7947904e2597", 00:07:54.448 "strip_size_kb": 0, 00:07:54.448 "state": "online", 00:07:54.448 "raid_level": "raid1", 00:07:54.448 "superblock": true, 00:07:54.448 "num_base_bdevs": 2, 00:07:54.448 "num_base_bdevs_discovered": 2, 00:07:54.448 "num_base_bdevs_operational": 2, 00:07:54.448 
"base_bdevs_list": [ 00:07:54.448 { 00:07:54.448 "name": "BaseBdev1", 00:07:54.448 "uuid": "28f50f89-4a2e-11ef-9c8e-7947904e2597", 00:07:54.448 "is_configured": true, 00:07:54.448 "data_offset": 2048, 00:07:54.448 "data_size": 63488 00:07:54.448 }, 00:07:54.448 { 00:07:54.448 "name": "BaseBdev2", 00:07:54.448 "uuid": "2a026285-4a2e-11ef-9c8e-7947904e2597", 00:07:54.448 "is_configured": true, 00:07:54.448 "data_offset": 2048, 00:07:54.448 "data_size": 63488 00:07:54.448 } 00:07:54.448 ] 00:07:54.448 }' 00:07:54.448 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:54.448 02:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:54.708 [2024-07-25 02:32:41.549111] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.708 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:54.708 "name": "Existed_Raid", 00:07:54.708 "aliases": [ 00:07:54.708 "29a1e43e-4a2e-11ef-9c8e-7947904e2597" 00:07:54.708 ], 00:07:54.708 "product_name": "Raid Volume", 00:07:54.708 "block_size": 512, 00:07:54.708 "num_blocks": 63488, 00:07:54.708 "uuid": "29a1e43e-4a2e-11ef-9c8e-7947904e2597", 00:07:54.708 "assigned_rate_limits": { 00:07:54.708 "rw_ios_per_sec": 0, 00:07:54.708 "rw_mbytes_per_sec": 0, 00:07:54.708 "r_mbytes_per_sec": 0, 00:07:54.708 "w_mbytes_per_sec": 0 00:07:54.708 }, 00:07:54.708 "claimed": false, 00:07:54.708 "zoned": false, 00:07:54.708 "supported_io_types": { 00:07:54.708 "read": true, 00:07:54.708 "write": true, 00:07:54.708 "unmap": false, 00:07:54.708 "flush": false, 00:07:54.708 "reset": true, 00:07:54.708 "nvme_admin": false, 00:07:54.708 "nvme_io": false, 00:07:54.708 "nvme_io_md": false, 00:07:54.708 "write_zeroes": true, 00:07:54.708 "zcopy": false, 00:07:54.708 "get_zone_info": false, 00:07:54.708 "zone_management": false, 00:07:54.708 "zone_append": false, 00:07:54.709 "compare": false, 00:07:54.709 "compare_and_write": false, 00:07:54.709 "abort": false, 00:07:54.709 "seek_hole": false, 00:07:54.709 "seek_data": false, 00:07:54.709 "copy": false, 00:07:54.709 "nvme_iov_md": false 00:07:54.709 }, 00:07:54.709 "memory_domains": [ 00:07:54.709 { 00:07:54.709 "dma_device_id": "system", 00:07:54.709 "dma_device_type": 1 00:07:54.709 }, 00:07:54.709 { 00:07:54.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.709 "dma_device_type": 2 00:07:54.709 }, 00:07:54.709 { 00:07:54.709 "dma_device_id": "system", 00:07:54.709 "dma_device_type": 1 00:07:54.709 }, 
00:07:54.709 { 00:07:54.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.709 "dma_device_type": 2 00:07:54.709 } 00:07:54.709 ], 00:07:54.709 "driver_specific": { 00:07:54.709 "raid": { 00:07:54.709 "uuid": "29a1e43e-4a2e-11ef-9c8e-7947904e2597", 00:07:54.709 "strip_size_kb": 0, 00:07:54.709 "state": "online", 00:07:54.709 "raid_level": "raid1", 00:07:54.709 "superblock": true, 00:07:54.709 "num_base_bdevs": 2, 00:07:54.709 "num_base_bdevs_discovered": 2, 00:07:54.709 "num_base_bdevs_operational": 2, 00:07:54.709 "base_bdevs_list": [ 00:07:54.709 { 00:07:54.709 "name": "BaseBdev1", 00:07:54.709 "uuid": "28f50f89-4a2e-11ef-9c8e-7947904e2597", 00:07:54.709 "is_configured": true, 00:07:54.709 "data_offset": 2048, 00:07:54.709 "data_size": 63488 00:07:54.709 }, 00:07:54.709 { 00:07:54.709 "name": "BaseBdev2", 00:07:54.709 "uuid": "2a026285-4a2e-11ef-9c8e-7947904e2597", 00:07:54.709 "is_configured": true, 00:07:54.709 "data_offset": 2048, 00:07:54.709 "data_size": 63488 00:07:54.709 } 00:07:54.709 ] 00:07:54.709 } 00:07:54.709 } 00:07:54.709 }' 00:07:54.709 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.709 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:54.709 BaseBdev2' 00:07:54.709 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:54.709 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:54.709 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:54.969 "name": "BaseBdev1", 00:07:54.969 "aliases": [ 00:07:54.969 "28f50f89-4a2e-11ef-9c8e-7947904e2597" 00:07:54.969 ], 00:07:54.969 "product_name": "Malloc disk", 00:07:54.969 "block_size": 512, 00:07:54.969 "num_blocks": 65536, 00:07:54.969 "uuid": "28f50f89-4a2e-11ef-9c8e-7947904e2597", 00:07:54.969 "assigned_rate_limits": { 00:07:54.969 "rw_ios_per_sec": 0, 00:07:54.969 "rw_mbytes_per_sec": 0, 00:07:54.969 "r_mbytes_per_sec": 0, 00:07:54.969 "w_mbytes_per_sec": 0 00:07:54.969 }, 00:07:54.969 "claimed": true, 00:07:54.969 "claim_type": "exclusive_write", 00:07:54.969 "zoned": false, 00:07:54.969 "supported_io_types": { 00:07:54.969 "read": true, 00:07:54.969 "write": true, 00:07:54.969 "unmap": true, 00:07:54.969 "flush": true, 00:07:54.969 "reset": true, 00:07:54.969 "nvme_admin": false, 00:07:54.969 "nvme_io": false, 00:07:54.969 "nvme_io_md": false, 00:07:54.969 "write_zeroes": true, 00:07:54.969 "zcopy": true, 00:07:54.969 "get_zone_info": false, 00:07:54.969 "zone_management": false, 00:07:54.969 "zone_append": false, 00:07:54.969 "compare": false, 00:07:54.969 "compare_and_write": false, 00:07:54.969 "abort": true, 00:07:54.969 "seek_hole": false, 00:07:54.969 "seek_data": false, 00:07:54.969 "copy": true, 00:07:54.969 "nvme_iov_md": false 00:07:54.969 }, 00:07:54.969 "memory_domains": [ 00:07:54.969 { 00:07:54.969 "dma_device_id": "system", 00:07:54.969 "dma_device_type": 1 00:07:54.969 }, 00:07:54.969 { 00:07:54.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.969 "dma_device_type": 2 00:07:54.969 } 00:07:54.969 ], 00:07:54.969 "driver_specific": {} 00:07:54.969 }' 00:07:54.969 02:32:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:54.969 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:54.970 02:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:55.230 "name": "BaseBdev2", 00:07:55.230 "aliases": [ 00:07:55.230 "2a026285-4a2e-11ef-9c8e-7947904e2597" 00:07:55.230 ], 00:07:55.230 "product_name": "Malloc disk", 00:07:55.230 "block_size": 512, 00:07:55.230 "num_blocks": 65536, 00:07:55.230 "uuid": "2a026285-4a2e-11ef-9c8e-7947904e2597", 00:07:55.230 "assigned_rate_limits": { 00:07:55.230 "rw_ios_per_sec": 0, 00:07:55.230 "rw_mbytes_per_sec": 0, 00:07:55.230 "r_mbytes_per_sec": 0, 00:07:55.230 "w_mbytes_per_sec": 0 00:07:55.230 }, 00:07:55.230 "claimed": true, 00:07:55.230 "claim_type": "exclusive_write", 00:07:55.230 "zoned": false, 00:07:55.230 "supported_io_types": { 00:07:55.230 "read": true, 00:07:55.230 "write": true, 00:07:55.230 "unmap": true, 00:07:55.230 "flush": true, 00:07:55.230 "reset": true, 00:07:55.230 "nvme_admin": false, 00:07:55.230 "nvme_io": false, 00:07:55.230 "nvme_io_md": false, 00:07:55.230 "write_zeroes": true, 00:07:55.230 "zcopy": true, 00:07:55.230 "get_zone_info": false, 00:07:55.230 "zone_management": false, 00:07:55.230 "zone_append": false, 00:07:55.230 "compare": false, 00:07:55.230 "compare_and_write": false, 00:07:55.230 "abort": true, 00:07:55.230 "seek_hole": false, 00:07:55.230 "seek_data": false, 00:07:55.230 "copy": true, 00:07:55.230 "nvme_iov_md": false 00:07:55.230 }, 00:07:55.230 "memory_domains": [ 00:07:55.230 { 00:07:55.230 "dma_device_id": "system", 00:07:55.230 "dma_device_type": 1 00:07:55.230 }, 00:07:55.230 { 00:07:55.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.230 "dma_device_type": 2 00:07:55.230 } 00:07:55.230 ], 00:07:55.230 "driver_specific": {} 00:07:55.230 }' 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:55.230 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:55.490 [2024-07-25 02:32:42.281511] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.490 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.750 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:55.750 
"name": "Existed_Raid", 00:07:55.750 "uuid": "29a1e43e-4a2e-11ef-9c8e-7947904e2597", 00:07:55.750 "strip_size_kb": 0, 00:07:55.750 "state": "online", 00:07:55.750 "raid_level": "raid1", 00:07:55.750 "superblock": true, 00:07:55.750 "num_base_bdevs": 2, 00:07:55.750 "num_base_bdevs_discovered": 1, 00:07:55.750 "num_base_bdevs_operational": 1, 00:07:55.750 "base_bdevs_list": [ 00:07:55.750 { 00:07:55.750 "name": null, 00:07:55.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.750 "is_configured": false, 00:07:55.750 "data_offset": 2048, 00:07:55.750 "data_size": 63488 00:07:55.750 }, 00:07:55.750 { 00:07:55.750 "name": "BaseBdev2", 00:07:55.750 "uuid": "2a026285-4a2e-11ef-9c8e-7947904e2597", 00:07:55.750 "is_configured": true, 00:07:55.750 "data_offset": 2048, 00:07:55.750 "data_size": 63488 00:07:55.750 } 00:07:55.750 ] 00:07:55.750 }' 00:07:55.750 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:55.750 02:32:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.010 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:56.010 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:56.010 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:56.010 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:56.270 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:56.270 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:56.270 02:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:56.270 [2024-07-25 02:32:43.094550] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:56.270 [2024-07-25 02:32:43.094572] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.270 [2024-07-25 02:32:43.099256] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.270 [2024-07-25 02:32:43.099267] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.271 [2024-07-25 02:32:43.099270] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34b4ef234a00 name Existed_Raid, state offline 00:07:56.271 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:56.271 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:56.271 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.271 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:56.531 02:32:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 50981 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 50981 ']' 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 50981 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 50981 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:56.531 killing process with pid 50981 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50981' 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 50981 00:07:56.531 [2024-07-25 02:32:43.308571] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.531 [2024-07-25 02:32:43.308602] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.531 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 50981 00:07:56.821 02:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:56.821 00:07:56.821 real 0m6.757s 00:07:56.821 user 0m11.448s 00:07:56.821 sys 0m1.356s 00:07:56.821 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.821 02:32:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.821 ************************************ 00:07:56.821 END TEST raid_state_function_test_sb 00:07:56.821 ************************************ 00:07:56.821 02:32:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:56.821 02:32:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:56.821 02:32:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:56.821 02:32:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.821 02:32:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.821 ************************************ 00:07:56.821 START TEST raid_superblock_test 00:07:56.821 ************************************ 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:56.821 02:32:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51247 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51247 /var/tmp/spdk-raid.sock 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51247 ']' 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.821 02:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.821 [2024-07-25 02:32:43.548942] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:07:56.821 [2024-07-25 02:32:43.549283] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:57.101 EAL: TSC is not safe to use in SMP mode 00:07:57.101 EAL: TSC is not invariant 00:07:57.101 [2024-07-25 02:32:43.972753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.361 [2024-07-25 02:32:44.063888] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:57.361 [2024-07-25 02:32:44.065555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.361 [2024-07-25 02:32:44.066141] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.361 [2024-07-25 02:32:44.066153] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:57.621 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:57.881 malloc1 00:07:57.881 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.141 [2024-07-25 02:32:44.777385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.142 [2024-07-25 02:32:44.777438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.142 [2024-07-25 02:32:44.777446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ec3a234780 00:07:58.142 [2024-07-25 02:32:44.777451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.142 [2024-07-25 02:32:44.778073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.142 [2024-07-25 02:32:44.778098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.142 pt1 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.142 02:32:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:58.142 malloc2 00:07:58.142 02:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.402 [2024-07-25 02:32:45.121568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.402 [2024-07-25 02:32:45.121606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.402 [2024-07-25 02:32:45.121613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ec3a234c80 00:07:58.402 [2024-07-25 02:32:45.121619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.402 [2024-07-25 02:32:45.122061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.402 [2024-07-25 02:32:45.122088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.402 pt2 00:07:58.402 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:58.402 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:58.402 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:58.662 [2024-07-25 02:32:45.297661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.662 [2024-07-25 02:32:45.298083] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.662 [2024-07-25 02:32:45.298133] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x6ec3a234f00 00:07:58.662 [2024-07-25 02:32:45.298139] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.662 [2024-07-25 02:32:45.298168] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x6ec3a297e20 00:07:58.662 [2024-07-25 02:32:45.298220] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x6ec3a234f00 00:07:58.662 [2024-07-25 02:32:45.298223] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x6ec3a234f00 00:07:58.662 [2024-07-25 02:32:45.298241] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.662 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:58.662 "name": "raid_bdev1", 00:07:58.662 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:07:58.662 "strip_size_kb": 0, 00:07:58.662 "state": "online", 00:07:58.662 "raid_level": "raid1", 00:07:58.662 "superblock": true, 00:07:58.662 "num_base_bdevs": 2, 00:07:58.663 "num_base_bdevs_discovered": 2, 00:07:58.663 "num_base_bdevs_operational": 2, 00:07:58.663 "base_bdevs_list": [ 00:07:58.663 { 00:07:58.663 "name": "pt1", 00:07:58.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.663 "is_configured": true, 00:07:58.663 "data_offset": 2048, 00:07:58.663 "data_size": 63488 00:07:58.663 }, 00:07:58.663 { 00:07:58.663 "name": "pt2", 00:07:58.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.663 "is_configured": true, 00:07:58.663 "data_offset": 2048, 00:07:58.663 "data_size": 63488 00:07:58.663 } 00:07:58.663 ] 00:07:58.663 }' 00:07:58.663 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:58.663 02:32:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:58.923 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:59.183 [2024-07-25 02:32:45.942004] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.183 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:59.183 "name": "raid_bdev1", 00:07:59.183 "aliases": [ 00:07:59.183 "2cd66fa6-4a2e-11ef-9c8e-7947904e2597" 00:07:59.183 ], 00:07:59.183 "product_name": "Raid Volume", 00:07:59.183 "block_size": 512, 00:07:59.183 "num_blocks": 63488, 00:07:59.183 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:07:59.183 "assigned_rate_limits": { 00:07:59.183 "rw_ios_per_sec": 0, 00:07:59.183 "rw_mbytes_per_sec": 0, 00:07:59.183 "r_mbytes_per_sec": 0, 00:07:59.183 "w_mbytes_per_sec": 0 00:07:59.183 }, 00:07:59.183 "claimed": false, 00:07:59.183 "zoned": false, 00:07:59.183 "supported_io_types": { 00:07:59.183 "read": true, 00:07:59.183 "write": true, 00:07:59.183 "unmap": false, 00:07:59.183 "flush": false, 00:07:59.183 "reset": true, 00:07:59.183 "nvme_admin": false, 00:07:59.183 "nvme_io": 
false, 00:07:59.183 "nvme_io_md": false, 00:07:59.183 "write_zeroes": true, 00:07:59.183 "zcopy": false, 00:07:59.183 "get_zone_info": false, 00:07:59.183 "zone_management": false, 00:07:59.183 "zone_append": false, 00:07:59.183 "compare": false, 00:07:59.183 "compare_and_write": false, 00:07:59.183 "abort": false, 00:07:59.183 "seek_hole": false, 00:07:59.183 "seek_data": false, 00:07:59.183 "copy": false, 00:07:59.183 "nvme_iov_md": false 00:07:59.183 }, 00:07:59.183 "memory_domains": [ 00:07:59.183 { 00:07:59.183 "dma_device_id": "system", 00:07:59.183 "dma_device_type": 1 00:07:59.183 }, 00:07:59.183 { 00:07:59.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.183 "dma_device_type": 2 00:07:59.183 }, 00:07:59.183 { 00:07:59.183 "dma_device_id": "system", 00:07:59.183 "dma_device_type": 1 00:07:59.183 }, 00:07:59.183 { 00:07:59.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.183 "dma_device_type": 2 00:07:59.183 } 00:07:59.183 ], 00:07:59.183 "driver_specific": { 00:07:59.183 "raid": { 00:07:59.183 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:07:59.183 "strip_size_kb": 0, 00:07:59.183 "state": "online", 00:07:59.183 "raid_level": "raid1", 00:07:59.183 "superblock": true, 00:07:59.183 "num_base_bdevs": 2, 00:07:59.183 "num_base_bdevs_discovered": 2, 00:07:59.183 "num_base_bdevs_operational": 2, 00:07:59.183 "base_bdevs_list": [ 00:07:59.183 { 00:07:59.183 "name": "pt1", 00:07:59.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.183 "is_configured": true, 00:07:59.183 "data_offset": 2048, 00:07:59.183 "data_size": 63488 00:07:59.183 }, 00:07:59.183 { 00:07:59.183 "name": "pt2", 00:07:59.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.183 "is_configured": true, 00:07:59.183 "data_offset": 2048, 00:07:59.183 "data_size": 63488 00:07:59.183 } 00:07:59.183 ] 00:07:59.183 } 00:07:59.183 } 00:07:59.183 }' 00:07:59.183 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.183 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:59.183 pt2' 00:07:59.183 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:59.183 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:59.183 02:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:59.444 "name": "pt1", 00:07:59.444 "aliases": [ 00:07:59.444 "00000000-0000-0000-0000-000000000001" 00:07:59.444 ], 00:07:59.444 "product_name": "passthru", 00:07:59.444 "block_size": 512, 00:07:59.444 "num_blocks": 65536, 00:07:59.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.444 "assigned_rate_limits": { 00:07:59.444 "rw_ios_per_sec": 0, 00:07:59.444 "rw_mbytes_per_sec": 0, 00:07:59.444 "r_mbytes_per_sec": 0, 00:07:59.444 "w_mbytes_per_sec": 0 00:07:59.444 }, 00:07:59.444 "claimed": true, 00:07:59.444 "claim_type": "exclusive_write", 00:07:59.444 "zoned": false, 00:07:59.444 "supported_io_types": { 00:07:59.444 "read": true, 00:07:59.444 "write": true, 00:07:59.444 "unmap": true, 00:07:59.444 "flush": true, 00:07:59.444 "reset": true, 00:07:59.444 "nvme_admin": false, 00:07:59.444 "nvme_io": false, 00:07:59.444 "nvme_io_md": false, 00:07:59.444 "write_zeroes": true, 
00:07:59.444 "zcopy": true, 00:07:59.444 "get_zone_info": false, 00:07:59.444 "zone_management": false, 00:07:59.444 "zone_append": false, 00:07:59.444 "compare": false, 00:07:59.444 "compare_and_write": false, 00:07:59.444 "abort": true, 00:07:59.444 "seek_hole": false, 00:07:59.444 "seek_data": false, 00:07:59.444 "copy": true, 00:07:59.444 "nvme_iov_md": false 00:07:59.444 }, 00:07:59.444 "memory_domains": [ 00:07:59.444 { 00:07:59.444 "dma_device_id": "system", 00:07:59.444 "dma_device_type": 1 00:07:59.444 }, 00:07:59.444 { 00:07:59.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.444 "dma_device_type": 2 00:07:59.444 } 00:07:59.444 ], 00:07:59.444 "driver_specific": { 00:07:59.444 "passthru": { 00:07:59.444 "name": "pt1", 00:07:59.444 "base_bdev_name": "malloc1" 00:07:59.444 } 00:07:59.444 } 00:07:59.444 }' 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:59.444 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:59.704 "name": "pt2", 00:07:59.704 "aliases": [ 00:07:59.704 "00000000-0000-0000-0000-000000000002" 00:07:59.704 ], 00:07:59.704 "product_name": "passthru", 00:07:59.704 "block_size": 512, 00:07:59.704 "num_blocks": 65536, 00:07:59.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.704 "assigned_rate_limits": { 00:07:59.704 "rw_ios_per_sec": 0, 00:07:59.704 "rw_mbytes_per_sec": 0, 00:07:59.704 "r_mbytes_per_sec": 0, 00:07:59.704 "w_mbytes_per_sec": 0 00:07:59.704 }, 00:07:59.704 "claimed": true, 00:07:59.704 "claim_type": "exclusive_write", 00:07:59.704 "zoned": false, 00:07:59.704 "supported_io_types": { 00:07:59.704 "read": true, 00:07:59.704 "write": true, 00:07:59.704 "unmap": true, 00:07:59.704 "flush": true, 00:07:59.704 "reset": true, 00:07:59.704 "nvme_admin": false, 00:07:59.704 "nvme_io": false, 00:07:59.704 "nvme_io_md": false, 00:07:59.704 "write_zeroes": true, 00:07:59.704 "zcopy": true, 00:07:59.704 "get_zone_info": false, 00:07:59.704 "zone_management": false, 00:07:59.704 "zone_append": false, 00:07:59.704 
"compare": false, 00:07:59.704 "compare_and_write": false, 00:07:59.704 "abort": true, 00:07:59.704 "seek_hole": false, 00:07:59.704 "seek_data": false, 00:07:59.704 "copy": true, 00:07:59.704 "nvme_iov_md": false 00:07:59.704 }, 00:07:59.704 "memory_domains": [ 00:07:59.704 { 00:07:59.704 "dma_device_id": "system", 00:07:59.704 "dma_device_type": 1 00:07:59.704 }, 00:07:59.704 { 00:07:59.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.704 "dma_device_type": 2 00:07:59.704 } 00:07:59.704 ], 00:07:59.704 "driver_specific": { 00:07:59.704 "passthru": { 00:07:59.704 "name": "pt2", 00:07:59.704 "base_bdev_name": "malloc2" 00:07:59.704 } 00:07:59.704 } 00:07:59.704 }' 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:59.704 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:59.964 [2024-07-25 02:32:46.694390] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.964 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2cd66fa6-4a2e-11ef-9c8e-7947904e2597 00:07:59.964 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2cd66fa6-4a2e-11ef-9c8e-7947904e2597 ']' 00:07:59.964 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:00.224 [2024-07-25 02:32:46.878458] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.224 [2024-07-25 02:32:46.878473] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.224 [2024-07-25 02:32:46.878486] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.224 [2024-07-25 02:32:46.878512] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.224 [2024-07-25 02:32:46.878516] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6ec3a234f00 name raid_bdev1, state offline 00:08:00.224 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:00.224 02:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:00.224 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:00.224 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:00.224 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:00.224 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:00.485 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:00.485 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.745 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.746 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.746 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.746 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.746 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.746 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.746 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:01.006 [2024-07-25 02:32:47.782911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:01.006 [2024-07-25 02:32:47.783363] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:01.006 [2024-07-25 02:32:47.783385] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:08:01.006 [2024-07-25 02:32:47.783409] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:01.006 [2024-07-25 02:32:47.783421] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.006 [2024-07-25 02:32:47.783425] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6ec3a234c80 name raid_bdev1, state configuring 00:08:01.006 request: 00:08:01.006 { 00:08:01.006 "name": "raid_bdev1", 00:08:01.006 "raid_level": "raid1", 00:08:01.006 "base_bdevs": [ 00:08:01.006 "malloc1", 00:08:01.006 "malloc2" 00:08:01.006 ], 00:08:01.006 "superblock": false, 00:08:01.006 "method": "bdev_raid_create", 00:08:01.006 "req_id": 1 00:08:01.006 } 00:08:01.006 Got JSON-RPC error response 00:08:01.006 response: 00:08:01.006 { 00:08:01.006 "code": -17, 00:08:01.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:01.006 } 00:08:01.006 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:01.006 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.006 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.006 02:32:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.006 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.006 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:01.267 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:01.267 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:01.267 02:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.267 [2024-07-25 02:32:48.127076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.267 [2024-07-25 02:32:48.127112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.267 [2024-07-25 02:32:48.127136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ec3a234780 00:08:01.267 [2024-07-25 02:32:48.127141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.267 [2024-07-25 02:32:48.127624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.267 [2024-07-25 02:32:48.127648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.267 [2024-07-25 02:32:48.127665] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:01.267 [2024-07-25 02:32:48.127674] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.267 pt1 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.267 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.526 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:01.526 "name": "raid_bdev1", 00:08:01.526 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:01.526 "strip_size_kb": 0, 00:08:01.526 "state": "configuring", 00:08:01.526 "raid_level": "raid1", 00:08:01.526 "superblock": true, 00:08:01.526 "num_base_bdevs": 2, 00:08:01.526 "num_base_bdevs_discovered": 1, 00:08:01.526 "num_base_bdevs_operational": 2, 00:08:01.526 "base_bdevs_list": [ 00:08:01.526 { 00:08:01.526 "name": "pt1", 00:08:01.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.526 "is_configured": true, 00:08:01.526 "data_offset": 2048, 00:08:01.526 "data_size": 63488 00:08:01.526 }, 00:08:01.526 { 00:08:01.526 "name": null, 00:08:01.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.526 "is_configured": false, 00:08:01.526 "data_offset": 2048, 00:08:01.526 "data_size": 63488 00:08:01.526 } 00:08:01.526 ] 00:08:01.526 }' 00:08:01.526 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:01.526 02:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.786 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:01.786 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:01.786 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:01.786 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.047 [2024-07-25 02:32:48.763378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.047 [2024-07-25 02:32:48.763414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.047 [2024-07-25 02:32:48.763422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ec3a234f00 00:08:02.047 [2024-07-25 02:32:48.763427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.047 [2024-07-25 02:32:48.763519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.047 [2024-07-25 02:32:48.763525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.047 [2024-07-25 02:32:48.763540] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:02.047 [2024-07-25 02:32:48.763546] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.047 [2024-07-25 02:32:48.763565] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x6ec3a235180 00:08:02.047 [2024-07-25 02:32:48.763568] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.047 [2024-07-25 02:32:48.763583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x6ec3a297e20 00:08:02.047 [2024-07-25 02:32:48.763617] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x6ec3a235180 00:08:02.047 [2024-07-25 02:32:48.763619] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x6ec3a235180 00:08:02.047 [2024-07-25 02:32:48.763635] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.047 pt2 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.047 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.308 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:02.308 "name": "raid_bdev1", 00:08:02.308 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:02.308 "strip_size_kb": 0, 00:08:02.308 "state": "online", 00:08:02.308 "raid_level": "raid1", 00:08:02.308 "superblock": true, 00:08:02.308 "num_base_bdevs": 2, 00:08:02.308 "num_base_bdevs_discovered": 2, 00:08:02.308 "num_base_bdevs_operational": 2, 00:08:02.308 "base_bdevs_list": [ 00:08:02.308 { 00:08:02.308 "name": "pt1", 00:08:02.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.308 "is_configured": true, 00:08:02.308 "data_offset": 2048, 00:08:02.308 "data_size": 63488 00:08:02.308 }, 00:08:02.308 { 00:08:02.308 "name": "pt2", 00:08:02.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.308 "is_configured": true, 00:08:02.308 "data_offset": 2048, 00:08:02.308 "data_size": 63488 00:08:02.308 } 00:08:02.308 ] 00:08:02.308 }' 00:08:02.308 02:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:02.308 
02:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:02.569 [2024-07-25 02:32:49.399696] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:02.569 "name": "raid_bdev1", 00:08:02.569 "aliases": [ 00:08:02.569 "2cd66fa6-4a2e-11ef-9c8e-7947904e2597" 00:08:02.569 ], 00:08:02.569 "product_name": "Raid Volume", 00:08:02.569 "block_size": 512, 00:08:02.569 "num_blocks": 63488, 00:08:02.569 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:02.569 "assigned_rate_limits": { 00:08:02.569 "rw_ios_per_sec": 0, 00:08:02.569 "rw_mbytes_per_sec": 0, 00:08:02.569 "r_mbytes_per_sec": 0, 00:08:02.569 "w_mbytes_per_sec": 0 00:08:02.569 }, 00:08:02.569 "claimed": false, 00:08:02.569 "zoned": false, 00:08:02.569 "supported_io_types": { 00:08:02.569 "read": true, 00:08:02.569 "write": true, 00:08:02.569 "unmap": false, 00:08:02.569 "flush": false, 00:08:02.569 "reset": true, 00:08:02.569 "nvme_admin": false, 00:08:02.569 "nvme_io": false, 00:08:02.569 "nvme_io_md": false, 00:08:02.569 "write_zeroes": true, 00:08:02.569 "zcopy": false, 00:08:02.569 "get_zone_info": false, 00:08:02.569 "zone_management": false, 00:08:02.569 "zone_append": false, 00:08:02.569 "compare": false, 00:08:02.569 "compare_and_write": false, 00:08:02.569 "abort": false, 00:08:02.569 "seek_hole": false, 00:08:02.569 "seek_data": false, 00:08:02.569 "copy": false, 00:08:02.569 "nvme_iov_md": false 00:08:02.569 }, 00:08:02.569 "memory_domains": [ 00:08:02.569 { 00:08:02.569 "dma_device_id": "system", 00:08:02.569 "dma_device_type": 1 00:08:02.569 }, 00:08:02.569 { 00:08:02.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.569 "dma_device_type": 2 00:08:02.569 }, 00:08:02.569 { 00:08:02.569 "dma_device_id": "system", 00:08:02.569 "dma_device_type": 1 00:08:02.569 }, 00:08:02.569 { 00:08:02.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.569 "dma_device_type": 2 00:08:02.569 } 00:08:02.569 ], 00:08:02.569 "driver_specific": { 00:08:02.569 "raid": { 00:08:02.569 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:02.569 "strip_size_kb": 0, 00:08:02.569 "state": "online", 00:08:02.569 "raid_level": "raid1", 00:08:02.569 "superblock": true, 00:08:02.569 "num_base_bdevs": 2, 00:08:02.569 "num_base_bdevs_discovered": 2, 00:08:02.569 "num_base_bdevs_operational": 2, 00:08:02.569 "base_bdevs_list": [ 00:08:02.569 { 00:08:02.569 "name": "pt1", 00:08:02.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.569 "is_configured": true, 00:08:02.569 
"data_offset": 2048, 00:08:02.569 "data_size": 63488 00:08:02.569 }, 00:08:02.569 { 00:08:02.569 "name": "pt2", 00:08:02.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.569 "is_configured": true, 00:08:02.569 "data_offset": 2048, 00:08:02.569 "data_size": 63488 00:08:02.569 } 00:08:02.569 ] 00:08:02.569 } 00:08:02.569 } 00:08:02.569 }' 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:02.569 pt2' 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:02.569 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:02.830 "name": "pt1", 00:08:02.830 "aliases": [ 00:08:02.830 "00000000-0000-0000-0000-000000000001" 00:08:02.830 ], 00:08:02.830 "product_name": "passthru", 00:08:02.830 "block_size": 512, 00:08:02.830 "num_blocks": 65536, 00:08:02.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.830 "assigned_rate_limits": { 00:08:02.830 "rw_ios_per_sec": 0, 00:08:02.830 "rw_mbytes_per_sec": 0, 00:08:02.830 "r_mbytes_per_sec": 0, 00:08:02.830 "w_mbytes_per_sec": 0 00:08:02.830 }, 00:08:02.830 "claimed": true, 00:08:02.830 "claim_type": "exclusive_write", 00:08:02.830 "zoned": false, 00:08:02.830 "supported_io_types": { 00:08:02.830 "read": true, 00:08:02.830 "write": true, 00:08:02.830 "unmap": true, 00:08:02.830 "flush": true, 00:08:02.830 "reset": true, 00:08:02.830 "nvme_admin": false, 00:08:02.830 "nvme_io": false, 00:08:02.830 "nvme_io_md": false, 00:08:02.830 "write_zeroes": true, 00:08:02.830 "zcopy": true, 00:08:02.830 "get_zone_info": false, 00:08:02.830 "zone_management": false, 00:08:02.830 "zone_append": false, 00:08:02.830 "compare": false, 00:08:02.830 "compare_and_write": false, 00:08:02.830 "abort": true, 00:08:02.830 "seek_hole": false, 00:08:02.830 "seek_data": false, 00:08:02.830 "copy": true, 00:08:02.830 "nvme_iov_md": false 00:08:02.830 }, 00:08:02.830 "memory_domains": [ 00:08:02.830 { 00:08:02.830 "dma_device_id": "system", 00:08:02.830 "dma_device_type": 1 00:08:02.830 }, 00:08:02.830 { 00:08:02.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.830 "dma_device_type": 2 00:08:02.830 } 00:08:02.830 ], 00:08:02.830 "driver_specific": { 00:08:02.830 "passthru": { 00:08:02.830 "name": "pt1", 00:08:02.830 "base_bdev_name": "malloc1" 00:08:02.830 } 00:08:02.830 } 00:08:02.830 }' 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:02.830 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:03.091 "name": "pt2", 00:08:03.091 "aliases": [ 00:08:03.091 "00000000-0000-0000-0000-000000000002" 00:08:03.091 ], 00:08:03.091 "product_name": "passthru", 00:08:03.091 "block_size": 512, 00:08:03.091 "num_blocks": 65536, 00:08:03.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.091 "assigned_rate_limits": { 00:08:03.091 "rw_ios_per_sec": 0, 00:08:03.091 "rw_mbytes_per_sec": 0, 00:08:03.091 "r_mbytes_per_sec": 0, 00:08:03.091 "w_mbytes_per_sec": 0 00:08:03.091 }, 00:08:03.091 "claimed": true, 00:08:03.091 "claim_type": "exclusive_write", 00:08:03.091 "zoned": false, 00:08:03.091 "supported_io_types": { 00:08:03.091 "read": true, 00:08:03.091 "write": true, 00:08:03.091 "unmap": true, 00:08:03.091 "flush": true, 00:08:03.091 "reset": true, 00:08:03.091 "nvme_admin": false, 00:08:03.091 "nvme_io": false, 00:08:03.091 "nvme_io_md": false, 00:08:03.091 "write_zeroes": true, 00:08:03.091 "zcopy": true, 00:08:03.091 "get_zone_info": false, 00:08:03.091 "zone_management": false, 00:08:03.091 "zone_append": false, 00:08:03.091 "compare": false, 00:08:03.091 "compare_and_write": false, 00:08:03.091 "abort": true, 00:08:03.091 "seek_hole": false, 00:08:03.091 "seek_data": false, 00:08:03.091 "copy": true, 00:08:03.091 "nvme_iov_md": false 00:08:03.091 }, 00:08:03.091 "memory_domains": [ 00:08:03.091 { 00:08:03.091 "dma_device_id": "system", 00:08:03.091 "dma_device_type": 1 00:08:03.091 }, 00:08:03.091 { 00:08:03.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.091 "dma_device_type": 2 00:08:03.091 } 00:08:03.091 ], 00:08:03.091 "driver_specific": { 00:08:03.091 "passthru": { 00:08:03.091 "name": "pt2", 00:08:03.091 "base_bdev_name": "malloc2" 00:08:03.091 } 00:08:03.091 } 00:08:03.091 }' 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:03.091 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:03.351 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:03.351 02:32:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:03.351 [2024-07-25 02:32:50.148039] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.351 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2cd66fa6-4a2e-11ef-9c8e-7947904e2597 '!=' 2cd66fa6-4a2e-11ef-9c8e-7947904e2597 ']' 00:08:03.351 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:08:03.351 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:03.351 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:03.351 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:03.612 [2024-07-25 02:32:50.332113] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.612 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.872 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:03.872 "name": "raid_bdev1", 00:08:03.872 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:03.872 "strip_size_kb": 0, 00:08:03.872 "state": "online", 00:08:03.872 "raid_level": "raid1", 00:08:03.872 "superblock": true, 00:08:03.872 "num_base_bdevs": 2, 00:08:03.872 "num_base_bdevs_discovered": 1, 00:08:03.872 "num_base_bdevs_operational": 1, 00:08:03.872 "base_bdevs_list": [ 00:08:03.872 { 00:08:03.872 "name": null, 00:08:03.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.872 "is_configured": false, 00:08:03.872 "data_offset": 
2048, 00:08:03.872 "data_size": 63488 00:08:03.872 }, 00:08:03.872 { 00:08:03.872 "name": "pt2", 00:08:03.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.872 "is_configured": true, 00:08:03.872 "data_offset": 2048, 00:08:03.872 "data_size": 63488 00:08:03.872 } 00:08:03.872 ] 00:08:03.872 }' 00:08:03.872 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:03.872 02:32:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.145 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:04.145 [2024-07-25 02:32:50.964391] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.145 [2024-07-25 02:32:50.964418] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.145 [2024-07-25 02:32:50.964430] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.145 [2024-07-25 02:32:50.964454] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.145 [2024-07-25 02:32:50.964458] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6ec3a235180 name raid_bdev1, state offline 00:08:04.145 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.145 02:32:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:08:04.405 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:08:04.405 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:08:04.405 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:08:04.405 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:04.405 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:04.666 [2024-07-25 02:32:51.508635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:04.666 [2024-07-25 02:32:51.508669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.666 [2024-07-25 02:32:51.508675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ec3a234f00 00:08:04.666 [2024-07-25 02:32:51.508680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.666 [2024-07-25 02:32:51.509201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.666 [2024-07-25 
02:32:51.509227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:04.666 [2024-07-25 02:32:51.509245] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:04.666 [2024-07-25 02:32:51.509254] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:04.666 [2024-07-25 02:32:51.509272] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x6ec3a235180 00:08:04.666 [2024-07-25 02:32:51.509275] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.666 [2024-07-25 02:32:51.509292] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x6ec3a297e20 00:08:04.666 [2024-07-25 02:32:51.509322] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x6ec3a235180 00:08:04.666 [2024-07-25 02:32:51.509325] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x6ec3a235180 00:08:04.666 [2024-07-25 02:32:51.509341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.666 pt2 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.666 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.926 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:04.926 "name": "raid_bdev1", 00:08:04.926 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:04.926 "strip_size_kb": 0, 00:08:04.926 "state": "online", 00:08:04.926 "raid_level": "raid1", 00:08:04.926 "superblock": true, 00:08:04.926 "num_base_bdevs": 2, 00:08:04.926 "num_base_bdevs_discovered": 1, 00:08:04.926 "num_base_bdevs_operational": 1, 00:08:04.926 "base_bdevs_list": [ 00:08:04.926 { 00:08:04.926 "name": null, 00:08:04.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.926 "is_configured": false, 00:08:04.926 "data_offset": 2048, 00:08:04.926 "data_size": 63488 00:08:04.926 }, 00:08:04.926 { 00:08:04.926 "name": "pt2", 00:08:04.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.926 "is_configured": true, 00:08:04.926 "data_offset": 2048, 00:08:04.926 "data_size": 63488 00:08:04.926 } 00:08:04.926 ] 00:08:04.926 }' 00:08:04.926 02:32:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:04.926 02:32:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.186 02:32:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:05.445 [2024-07-25 02:32:52.120900] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.445 [2024-07-25 02:32:52.120918] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.445 [2024-07-25 02:32:52.120935] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.445 [2024-07-25 02:32:52.120960] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.445 [2024-07-25 02:32:52.120964] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6ec3a235180 name raid_bdev1, state offline 00:08:05.445 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.445 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:08:05.445 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:08:05.445 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:08:05.445 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:08:05.445 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:05.714 [2024-07-25 02:32:52.497068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:05.714 [2024-07-25 02:32:52.497105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.714 [2024-07-25 02:32:52.497111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ec3a234c80 00:08:05.714 [2024-07-25 02:32:52.497116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.714 [2024-07-25 02:32:52.497617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.714 [2024-07-25 02:32:52.497640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:05.714 [2024-07-25 02:32:52.497658] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:05.714 [2024-07-25 02:32:52.497667] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.714 [2024-07-25 02:32:52.497687] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:05.714 [2024-07-25 02:32:52.497694] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:05.714 [2024-07-25 02:32:52.497698] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6ec3a234780 name raid_bdev1, state configuring 00:08:05.714 [2024-07-25 02:32:52.497703] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:05.714 [2024-07-25 02:32:52.497713] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x6ec3a234780 00:08:05.714 [2024-07-25 02:32:52.497716] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.714 [2024-07-25 02:32:52.497733] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x6ec3a297e20 00:08:05.714 [2024-07-25 02:32:52.497761] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x6ec3a234780 00:08:05.714 [2024-07-25 02:32:52.497768] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x6ec3a234780 00:08:05.714 [2024-07-25 02:32:52.497782] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.714 pt1 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.714 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.989 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.989 "name": "raid_bdev1", 00:08:05.989 "uuid": "2cd66fa6-4a2e-11ef-9c8e-7947904e2597", 00:08:05.989 "strip_size_kb": 0, 00:08:05.989 "state": "online", 00:08:05.989 "raid_level": "raid1", 00:08:05.989 "superblock": true, 00:08:05.989 "num_base_bdevs": 2, 00:08:05.989 "num_base_bdevs_discovered": 1, 00:08:05.989 "num_base_bdevs_operational": 1, 00:08:05.989 "base_bdevs_list": [ 00:08:05.989 { 00:08:05.989 "name": null, 00:08:05.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.989 "is_configured": false, 00:08:05.989 "data_offset": 2048, 00:08:05.989 "data_size": 63488 00:08:05.989 }, 00:08:05.989 { 00:08:05.989 "name": "pt2", 00:08:05.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.989 "is_configured": true, 00:08:05.989 "data_offset": 2048, 00:08:05.989 "data_size": 63488 00:08:05.989 } 00:08:05.989 ] 00:08:05.989 }' 00:08:05.989 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.989 02:32:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.248 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:08:06.248 02:32:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r 
'.[].base_bdevs_list[0].is_configured' 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:08:06.508 [2024-07-25 02:32:53.325456] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2cd66fa6-4a2e-11ef-9c8e-7947904e2597 '!=' 2cd66fa6-4a2e-11ef-9c8e-7947904e2597 ']' 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51247 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51247 ']' 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51247 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51247 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:06.508 killing process with pid 51247 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51247' 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51247 00:08:06.508 [2024-07-25 02:32:53.355800] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.508 [2024-07-25 02:32:53.355816] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.508 [2024-07-25 02:32:53.355835] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.508 [2024-07-25 02:32:53.355839] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x6ec3a234780 name raid_bdev1, state offline 00:08:06.508 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51247 00:08:06.508 [2024-07-25 02:32:53.365099] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.767 02:32:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:06.767 00:08:06.767 real 0m9.993s 00:08:06.767 user 0m17.663s 00:08:06.767 sys 0m1.706s 00:08:06.767 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.767 02:32:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 ************************************ 00:08:06.767 END TEST raid_superblock_test 00:08:06.767 ************************************ 00:08:06.767 02:32:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:06.767 02:32:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:06.767 02:32:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:06.767 02:32:53 bdev_raid 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.767 02:32:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 ************************************ 00:08:06.767 START TEST raid_read_error_test 00:08:06.767 ************************************ 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.RMzanyZJnD 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51624 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51624 /var/tmp/spdk-raid.sock 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51624 ']' 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.767 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.767 02:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.767 [2024-07-25 02:32:53.613371] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:08:06.767 [2024-07-25 02:32:53.613590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:07.336 EAL: TSC is not safe to use in SMP mode 00:08:07.336 EAL: TSC is not invariant 00:08:07.336 [2024-07-25 02:32:54.032807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.336 [2024-07-25 02:32:54.123174] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:07.336 [2024-07-25 02:32:54.124858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.336 [2024-07-25 02:32:54.125434] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.336 [2024-07-25 02:32:54.125445] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.904 02:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.904 02:32:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:07.904 02:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:07.904 02:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.904 BaseBdev1_malloc 00:08:07.904 02:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:08.164 true 00:08:08.164 02:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:08.164 [2024-07-25 02:32:55.020632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:08.164 [2024-07-25 02:32:55.020673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.164 [2024-07-25 02:32:55.020710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285e3d834780 00:08:08.164 [2024-07-25 02:32:55.020716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.164 [2024-07-25 02:32:55.021158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.164 [2024-07-25 02:32:55.021183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:08.164 BaseBdev1 00:08:08.164 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:08.164 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:08.423 BaseBdev2_malloc 00:08:08.423 02:32:55 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:08.682 true 00:08:08.682 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:08.682 [2024-07-25 02:32:55.568849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:08.682 [2024-07-25 02:32:55.568889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.682 [2024-07-25 02:32:55.568909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x285e3d834c80 00:08:08.682 [2024-07-25 02:32:55.568931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.682 [2024-07-25 02:32:55.569348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.682 [2024-07-25 02:32:55.569374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:08.682 BaseBdev2 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:08.942 [2024-07-25 02:32:55.728923] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.942 [2024-07-25 02:32:55.729341] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.942 [2024-07-25 02:32:55.729397] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x285e3d834f00 00:08:08.942 [2024-07-25 02:32:55.729402] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.942 [2024-07-25 02:32:55.729429] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x285e3d8a0e20 00:08:08.942 [2024-07-25 02:32:55.729480] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x285e3d834f00 00:08:08.942 [2024-07-25 02:32:55.729483] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x285e3d834f00 00:08:08.942 [2024-07-25 02:32:55.729501] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.942 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.202 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:09.202 "name": "raid_bdev1", 00:08:09.202 "uuid": "330e1ed7-4a2e-11ef-9c8e-7947904e2597", 00:08:09.202 "strip_size_kb": 0, 00:08:09.202 "state": "online", 00:08:09.202 "raid_level": "raid1", 00:08:09.202 "superblock": true, 00:08:09.202 "num_base_bdevs": 2, 00:08:09.202 "num_base_bdevs_discovered": 2, 00:08:09.202 "num_base_bdevs_operational": 2, 00:08:09.202 "base_bdevs_list": [ 00:08:09.202 { 00:08:09.202 "name": "BaseBdev1", 00:08:09.202 "uuid": "3e2b0cbf-5086-e25c-8762-33cbc2891614", 00:08:09.202 "is_configured": true, 00:08:09.202 "data_offset": 2048, 00:08:09.202 "data_size": 63488 00:08:09.202 }, 00:08:09.202 { 00:08:09.202 "name": "BaseBdev2", 00:08:09.202 "uuid": "eedbe80f-39f6-f153-9a5c-5fd39d3b960c", 00:08:09.202 "is_configured": true, 00:08:09.202 "data_offset": 2048, 00:08:09.202 "data_size": 63488 00:08:09.202 } 00:08:09.202 ] 00:08:09.202 }' 00:08:09.202 02:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:09.202 02:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.461 02:32:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:09.461 02:32:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:09.461 [2024-07-25 02:32:56.281203] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x285e3d8a0ec0 00:08:10.400 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.661 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.920 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:10.920 "name": "raid_bdev1", 00:08:10.920 "uuid": "330e1ed7-4a2e-11ef-9c8e-7947904e2597", 00:08:10.920 "strip_size_kb": 0, 00:08:10.920 "state": "online", 00:08:10.920 "raid_level": "raid1", 00:08:10.920 "superblock": true, 00:08:10.920 "num_base_bdevs": 2, 00:08:10.920 "num_base_bdevs_discovered": 2, 00:08:10.920 "num_base_bdevs_operational": 2, 00:08:10.920 "base_bdevs_list": [ 00:08:10.920 { 00:08:10.920 "name": "BaseBdev1", 00:08:10.920 "uuid": "3e2b0cbf-5086-e25c-8762-33cbc2891614", 00:08:10.920 "is_configured": true, 00:08:10.920 "data_offset": 2048, 00:08:10.920 "data_size": 63488 00:08:10.920 }, 00:08:10.920 { 00:08:10.920 "name": "BaseBdev2", 00:08:10.920 "uuid": "eedbe80f-39f6-f153-9a5c-5fd39d3b960c", 00:08:10.920 "is_configured": true, 00:08:10.920 "data_offset": 2048, 00:08:10.920 "data_size": 63488 00:08:10.920 } 00:08:10.921 ] 00:08:10.921 }' 00:08:10.921 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:10.921 02:32:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.180 02:32:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:11.180 [2024-07-25 02:32:58.067709] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.180 [2024-07-25 02:32:58.067736] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.180 [2024-07-25 02:32:58.068021] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.180 [2024-07-25 02:32:58.068028] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.180 [2024-07-25 02:32:58.068039] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.180 [2024-07-25 02:32:58.068043] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x285e3d834f00 name raid_bdev1, state offline 00:08:11.180 0 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51624 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51624 ']' 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51624 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51624 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:11.440 killing process with pid 51624 00:08:11.440 02:32:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51624' 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51624 00:08:11.440 [2024-07-25 02:32:58.096908] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51624 00:08:11.440 [2024-07-25 02:32:58.105942] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.RMzanyZJnD 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:11.440 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:11.441 00:08:11.441 real 0m4.689s 00:08:11.441 user 0m6.779s 00:08:11.441 sys 0m0.939s 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.441 02:32:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.441 ************************************ 00:08:11.441 END TEST raid_read_error_test 00:08:11.441 ************************************ 00:08:11.441 02:32:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:11.441 02:32:58 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:11.441 02:32:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:11.441 02:32:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.441 02:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 ************************************ 00:08:11.701 START TEST raid_write_error_test 00:08:11.701 ************************************ 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:11.701 02:32:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qyyKJgLbv6 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51748 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51748 /var/tmp/spdk-raid.sock 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51748 ']' 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.701 02:32:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.701 [2024-07-25 02:32:58.366626] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:08:11.701 [2024-07-25 02:32:58.366959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:11.961 EAL: TSC is not safe to use in SMP mode 00:08:11.961 EAL: TSC is not invariant 00:08:11.961 [2024-07-25 02:32:58.787218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.220 [2024-07-25 02:32:58.877864] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:12.220 [2024-07-25 02:32:58.879519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.220 [2024-07-25 02:32:58.880091] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.220 [2024-07-25 02:32:58.880102] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.480 02:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.480 02:32:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:12.480 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:12.480 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:12.739 BaseBdev1_malloc 00:08:12.740 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:12.740 true 00:08:12.999 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:12.999 [2024-07-25 02:32:59.787293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:12.999 [2024-07-25 02:32:59.787338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.999 [2024-07-25 02:32:59.787359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x353221c34780 00:08:12.999 [2024-07-25 02:32:59.787365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.999 [2024-07-25 02:32:59.787811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.999 [2024-07-25 02:32:59.787836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:12.999 BaseBdev1 00:08:12.999 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:12.999 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:13.258 BaseBdev2_malloc 00:08:13.258 02:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:13.516 true 00:08:13.516 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:13.516 [2024-07-25 02:33:00.331488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:13.516 [2024-07-25 02:33:00.331524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.516 [2024-07-25 02:33:00.331546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x353221c34c80 00:08:13.516 [2024-07-25 02:33:00.331552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.516 [2024-07-25 02:33:00.331993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.516 [2024-07-25 02:33:00.332019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:08:13.516 BaseBdev2 00:08:13.516 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:13.775 [2024-07-25 02:33:00.515555] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.775 [2024-07-25 02:33:00.515950] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.775 [2024-07-25 02:33:00.516007] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x353221c34f00 00:08:13.775 [2024-07-25 02:33:00.516012] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.775 [2024-07-25 02:33:00.516037] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x353221ca0e20 00:08:13.775 [2024-07-25 02:33:00.516088] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x353221c34f00 00:08:13.775 [2024-07-25 02:33:00.516091] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x353221c34f00 00:08:13.775 [2024-07-25 02:33:00.516108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.775 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.047 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.047 "name": "raid_bdev1", 00:08:14.047 "uuid": "35e880b3-4a2e-11ef-9c8e-7947904e2597", 00:08:14.047 "strip_size_kb": 0, 00:08:14.047 "state": "online", 00:08:14.047 "raid_level": "raid1", 00:08:14.047 "superblock": true, 00:08:14.047 "num_base_bdevs": 2, 00:08:14.047 "num_base_bdevs_discovered": 2, 00:08:14.047 "num_base_bdevs_operational": 2, 00:08:14.047 "base_bdevs_list": [ 00:08:14.047 { 00:08:14.047 "name": "BaseBdev1", 00:08:14.047 "uuid": "dc681255-87bd-0052-a0b1-9b0b4ba71c24", 00:08:14.047 "is_configured": true, 00:08:14.047 "data_offset": 2048, 00:08:14.047 "data_size": 63488 00:08:14.047 }, 00:08:14.047 { 00:08:14.047 "name": "BaseBdev2", 00:08:14.047 "uuid": "57f682f2-a6df-9e57-8c4b-30e9e84f22fc", 
00:08:14.047 "is_configured": true, 00:08:14.047 "data_offset": 2048, 00:08:14.047 "data_size": 63488 00:08:14.047 } 00:08:14.047 ] 00:08:14.047 }' 00:08:14.047 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.047 02:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.320 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:14.320 02:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:14.320 [2024-07-25 02:33:01.083811] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x353221ca0ec0 00:08:15.259 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:15.518 [2024-07-25 02:33:02.183850] bdev_raid.c:2248:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:15.518 [2024-07-25 02:33:02.183913] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.518 [2024-07-25 02:33:02.184034] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x353221ca0ec0 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:15.518 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.519 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.519 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:15.519 "name": "raid_bdev1", 00:08:15.519 "uuid": "35e880b3-4a2e-11ef-9c8e-7947904e2597", 00:08:15.519 "strip_size_kb": 0, 00:08:15.519 "state": "online", 00:08:15.519 "raid_level": "raid1", 00:08:15.519 
"superblock": true, 00:08:15.519 "num_base_bdevs": 2, 00:08:15.519 "num_base_bdevs_discovered": 1, 00:08:15.519 "num_base_bdevs_operational": 1, 00:08:15.519 "base_bdevs_list": [ 00:08:15.519 { 00:08:15.519 "name": null, 00:08:15.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.519 "is_configured": false, 00:08:15.519 "data_offset": 2048, 00:08:15.519 "data_size": 63488 00:08:15.519 }, 00:08:15.519 { 00:08:15.519 "name": "BaseBdev2", 00:08:15.519 "uuid": "57f682f2-a6df-9e57-8c4b-30e9e84f22fc", 00:08:15.519 "is_configured": true, 00:08:15.519 "data_offset": 2048, 00:08:15.519 "data_size": 63488 00:08:15.519 } 00:08:15.519 ] 00:08:15.519 }' 00:08:15.519 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:15.519 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:16.088 [2024-07-25 02:33:02.849098] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.088 [2024-07-25 02:33:02.849124] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.088 [2024-07-25 02:33:02.849361] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.088 [2024-07-25 02:33:02.849367] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.088 [2024-07-25 02:33:02.849376] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.088 [2024-07-25 02:33:02.849380] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x353221c34f00 name raid_bdev1, state offline 00:08:16.088 0 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51748 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51748 ']' 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51748 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51748 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:16.088 killing process with pid 51748 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51748' 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51748 00:08:16.088 [2024-07-25 02:33:02.894655] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.088 02:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51748 00:08:16.088 [2024-07-25 02:33:02.903425] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qyyKJgLbv6 00:08:16.348 02:33:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:16.348 00:08:16.348 real 0m4.734s 00:08:16.348 user 0m6.931s 00:08:16.348 sys 0m0.877s 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.348 02:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.348 ************************************ 00:08:16.348 END TEST raid_write_error_test 00:08:16.349 ************************************ 00:08:16.349 02:33:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:16.349 02:33:03 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:08:16.349 02:33:03 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:16.349 02:33:03 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:16.349 02:33:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:16.349 02:33:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.349 02:33:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.349 ************************************ 00:08:16.349 START TEST raid_state_function_test 00:08:16.349 ************************************ 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:16.349 02:33:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51866 00:08:16.349 Process raid pid: 51866 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51866' 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51866 /var/tmp/spdk-raid.sock 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51866 ']' 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.349 02:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.349 [2024-07-25 02:33:03.149067] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:08:16.349 [2024-07-25 02:33:03.149420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:16.922 EAL: TSC is not safe to use in SMP mode 00:08:16.922 EAL: TSC is not invariant 00:08:16.922 [2024-07-25 02:33:03.570709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.922 [2024-07-25 02:33:03.661696] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:16.922 [2024-07-25 02:33:03.663352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.922 [2024-07-25 02:33:03.663970] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.922 [2024-07-25 02:33:03.663980] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.182 02:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.182 02:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:17.182 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:17.441 [2024-07-25 02:33:04.219097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:17.441 [2024-07-25 02:33:04.219131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:17.441 [2024-07-25 02:33:04.219151] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.441 [2024-07-25 02:33:04.219156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.441 [2024-07-25 02:33:04.219159] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:17.441 [2024-07-25 02:33:04.219164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.441 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.701 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:17.701 "name": "Existed_Raid", 00:08:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.701 "strip_size_kb": 64, 00:08:17.701 "state": "configuring", 00:08:17.701 "raid_level": "raid0", 00:08:17.701 "superblock": false, 00:08:17.701 "num_base_bdevs": 3, 00:08:17.701 "num_base_bdevs_discovered": 0, 00:08:17.701 "num_base_bdevs_operational": 3, 00:08:17.701 "base_bdevs_list": [ 
00:08:17.701 { 00:08:17.701 "name": "BaseBdev1", 00:08:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.701 "is_configured": false, 00:08:17.701 "data_offset": 0, 00:08:17.701 "data_size": 0 00:08:17.701 }, 00:08:17.701 { 00:08:17.701 "name": "BaseBdev2", 00:08:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.701 "is_configured": false, 00:08:17.701 "data_offset": 0, 00:08:17.701 "data_size": 0 00:08:17.701 }, 00:08:17.701 { 00:08:17.701 "name": "BaseBdev3", 00:08:17.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.701 "is_configured": false, 00:08:17.701 "data_offset": 0, 00:08:17.701 "data_size": 0 00:08:17.701 } 00:08:17.701 ] 00:08:17.701 }' 00:08:17.701 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:17.701 02:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.961 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:17.961 [2024-07-25 02:33:04.855281] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.961 [2024-07-25 02:33:04.855298] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209a58034500 name Existed_Raid, state configuring 00:08:18.220 02:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:18.220 [2024-07-25 02:33:05.035346] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:18.220 [2024-07-25 02:33:05.035371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:18.220 [2024-07-25 02:33:05.035374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:18.220 [2024-07-25 02:33:05.035395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:18.220 [2024-07-25 02:33:05.035398] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:18.220 [2024-07-25 02:33:05.035402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:18.220 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:18.479 [2024-07-25 02:33:05.220157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.479 BaseBdev1 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:18.479 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:18.739 [ 00:08:18.739 { 00:08:18.739 "name": "BaseBdev1", 00:08:18.739 "aliases": [ 00:08:18.739 "38b64189-4a2e-11ef-9c8e-7947904e2597" 00:08:18.739 ], 00:08:18.739 "product_name": "Malloc disk", 00:08:18.739 "block_size": 512, 00:08:18.739 "num_blocks": 65536, 00:08:18.739 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:18.739 "assigned_rate_limits": { 00:08:18.739 "rw_ios_per_sec": 0, 00:08:18.739 "rw_mbytes_per_sec": 0, 00:08:18.739 "r_mbytes_per_sec": 0, 00:08:18.739 "w_mbytes_per_sec": 0 00:08:18.739 }, 00:08:18.739 "claimed": true, 00:08:18.739 "claim_type": "exclusive_write", 00:08:18.739 "zoned": false, 00:08:18.739 "supported_io_types": { 00:08:18.739 "read": true, 00:08:18.739 "write": true, 00:08:18.739 "unmap": true, 00:08:18.739 "flush": true, 00:08:18.739 "reset": true, 00:08:18.739 "nvme_admin": false, 00:08:18.739 "nvme_io": false, 00:08:18.739 "nvme_io_md": false, 00:08:18.739 "write_zeroes": true, 00:08:18.739 "zcopy": true, 00:08:18.739 "get_zone_info": false, 00:08:18.739 "zone_management": false, 00:08:18.739 "zone_append": false, 00:08:18.739 "compare": false, 00:08:18.739 "compare_and_write": false, 00:08:18.739 "abort": true, 00:08:18.739 "seek_hole": false, 00:08:18.739 "seek_data": false, 00:08:18.739 "copy": true, 00:08:18.739 "nvme_iov_md": false 00:08:18.739 }, 00:08:18.739 "memory_domains": [ 00:08:18.739 { 00:08:18.739 "dma_device_id": "system", 00:08:18.739 "dma_device_type": 1 00:08:18.739 }, 00:08:18.739 { 00:08:18.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.739 "dma_device_type": 2 00:08:18.739 } 00:08:18.739 ], 00:08:18.739 "driver_specific": {} 00:08:18.739 } 00:08:18.739 ] 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.739 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.999 02:33:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:18.999 "name": "Existed_Raid", 00:08:18.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.999 "strip_size_kb": 64, 00:08:18.999 "state": "configuring", 00:08:18.999 "raid_level": "raid0", 00:08:18.999 "superblock": false, 00:08:18.999 "num_base_bdevs": 3, 00:08:18.999 "num_base_bdevs_discovered": 1, 00:08:18.999 "num_base_bdevs_operational": 3, 00:08:18.999 "base_bdevs_list": [ 00:08:18.999 { 00:08:18.999 "name": "BaseBdev1", 00:08:18.999 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:18.999 "is_configured": true, 00:08:18.999 "data_offset": 0, 00:08:18.999 "data_size": 65536 00:08:18.999 }, 00:08:18.999 { 00:08:18.999 "name": "BaseBdev2", 00:08:18.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.999 "is_configured": false, 00:08:18.999 "data_offset": 0, 00:08:18.999 "data_size": 0 00:08:18.999 }, 00:08:18.999 { 00:08:18.999 "name": "BaseBdev3", 00:08:18.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.999 "is_configured": false, 00:08:18.999 "data_offset": 0, 00:08:18.999 "data_size": 0 00:08:18.999 } 00:08:18.999 ] 00:08:18.999 }' 00:08:18.999 02:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:18.999 02:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.259 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:19.520 [2024-07-25 02:33:06.207702] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.520 [2024-07-25 02:33:06.207719] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209a58034500 name Existed_Raid, state configuring 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:19.520 [2024-07-25 02:33:06.387767] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.520 [2024-07-25 02:33:06.388365] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.520 [2024-07-25 02:33:06.388398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.520 [2024-07-25 02:33:06.388401] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.520 [2024-07-25 02:33:06.388407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:19.520 02:33:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:19.520 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.521 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.781 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:19.781 "name": "Existed_Raid", 00:08:19.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.781 "strip_size_kb": 64, 00:08:19.781 "state": "configuring", 00:08:19.781 "raid_level": "raid0", 00:08:19.781 "superblock": false, 00:08:19.781 "num_base_bdevs": 3, 00:08:19.781 "num_base_bdevs_discovered": 1, 00:08:19.781 "num_base_bdevs_operational": 3, 00:08:19.781 "base_bdevs_list": [ 00:08:19.781 { 00:08:19.781 "name": "BaseBdev1", 00:08:19.781 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:19.781 "is_configured": true, 00:08:19.781 "data_offset": 0, 00:08:19.781 "data_size": 65536 00:08:19.781 }, 00:08:19.781 { 00:08:19.781 "name": "BaseBdev2", 00:08:19.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.781 "is_configured": false, 00:08:19.781 "data_offset": 0, 00:08:19.781 "data_size": 0 00:08:19.781 }, 00:08:19.781 { 00:08:19.781 "name": "BaseBdev3", 00:08:19.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.781 "is_configured": false, 00:08:19.781 "data_offset": 0, 00:08:19.781 "data_size": 0 00:08:19.781 } 00:08:19.781 ] 00:08:19.781 }' 00:08:19.781 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:19.781 02:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.040 02:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:20.299 [2024-07-25 02:33:07.024079] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.300 BaseBdev2 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:20.300 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:20.560 [ 00:08:20.560 { 00:08:20.560 "name": "BaseBdev2", 00:08:20.560 "aliases": [ 00:08:20.560 "39c99c22-4a2e-11ef-9c8e-7947904e2597" 00:08:20.560 ], 00:08:20.560 "product_name": "Malloc disk", 00:08:20.560 "block_size": 512, 00:08:20.560 "num_blocks": 65536, 00:08:20.560 "uuid": "39c99c22-4a2e-11ef-9c8e-7947904e2597", 00:08:20.560 "assigned_rate_limits": { 00:08:20.560 "rw_ios_per_sec": 0, 00:08:20.560 "rw_mbytes_per_sec": 0, 00:08:20.560 "r_mbytes_per_sec": 0, 00:08:20.560 "w_mbytes_per_sec": 0 00:08:20.560 }, 00:08:20.560 "claimed": true, 00:08:20.560 "claim_type": "exclusive_write", 00:08:20.560 "zoned": false, 00:08:20.560 "supported_io_types": { 00:08:20.560 "read": true, 00:08:20.560 "write": true, 00:08:20.560 "unmap": true, 00:08:20.560 "flush": true, 00:08:20.560 "reset": true, 00:08:20.560 "nvme_admin": false, 00:08:20.560 "nvme_io": false, 00:08:20.560 "nvme_io_md": false, 00:08:20.560 "write_zeroes": true, 00:08:20.560 "zcopy": true, 00:08:20.560 "get_zone_info": false, 00:08:20.560 "zone_management": false, 00:08:20.560 "zone_append": false, 00:08:20.560 "compare": false, 00:08:20.560 "compare_and_write": false, 00:08:20.560 "abort": true, 00:08:20.560 "seek_hole": false, 00:08:20.560 "seek_data": false, 00:08:20.560 "copy": true, 00:08:20.560 "nvme_iov_md": false 00:08:20.560 }, 00:08:20.560 "memory_domains": [ 00:08:20.560 { 00:08:20.560 "dma_device_id": "system", 00:08:20.560 "dma_device_type": 1 00:08:20.560 }, 00:08:20.560 { 00:08:20.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.560 "dma_device_type": 2 00:08:20.560 } 00:08:20.560 ], 00:08:20.560 "driver_specific": {} 00:08:20.560 } 00:08:20.560 ] 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.560 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:08:20.819 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:20.819 "name": "Existed_Raid", 00:08:20.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.819 "strip_size_kb": 64, 00:08:20.819 "state": "configuring", 00:08:20.819 "raid_level": "raid0", 00:08:20.819 "superblock": false, 00:08:20.819 "num_base_bdevs": 3, 00:08:20.819 "num_base_bdevs_discovered": 2, 00:08:20.819 "num_base_bdevs_operational": 3, 00:08:20.819 "base_bdevs_list": [ 00:08:20.819 { 00:08:20.819 "name": "BaseBdev1", 00:08:20.819 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:20.819 "is_configured": true, 00:08:20.819 "data_offset": 0, 00:08:20.819 "data_size": 65536 00:08:20.819 }, 00:08:20.819 { 00:08:20.819 "name": "BaseBdev2", 00:08:20.819 "uuid": "39c99c22-4a2e-11ef-9c8e-7947904e2597", 00:08:20.819 "is_configured": true, 00:08:20.819 "data_offset": 0, 00:08:20.819 "data_size": 65536 00:08:20.819 }, 00:08:20.819 { 00:08:20.819 "name": "BaseBdev3", 00:08:20.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.819 "is_configured": false, 00:08:20.819 "data_offset": 0, 00:08:20.819 "data_size": 0 00:08:20.819 } 00:08:20.819 ] 00:08:20.819 }' 00:08:20.819 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:20.820 02:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.079 02:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:21.339 [2024-07-25 02:33:08.052378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.339 [2024-07-25 02:33:08.052397] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x209a58034a00 00:08:21.339 [2024-07-25 02:33:08.052400] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:21.339 [2024-07-25 02:33:08.052416] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209a58097e20 00:08:21.339 [2024-07-25 02:33:08.052485] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x209a58034a00 00:08:21.339 [2024-07-25 02:33:08.052488] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x209a58034a00 00:08:21.339 [2024-07-25 02:33:08.052512] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.339 BaseBdev3 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:21.339 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:21.599 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.599 [ 00:08:21.599 { 00:08:21.599 "name": "BaseBdev3", 00:08:21.599 "aliases": [ 00:08:21.599 "3a6684f7-4a2e-11ef-9c8e-7947904e2597" 00:08:21.599 ], 00:08:21.599 "product_name": "Malloc disk", 00:08:21.599 "block_size": 512, 00:08:21.599 "num_blocks": 65536, 00:08:21.599 "uuid": "3a6684f7-4a2e-11ef-9c8e-7947904e2597", 00:08:21.599 "assigned_rate_limits": { 00:08:21.599 "rw_ios_per_sec": 0, 00:08:21.599 "rw_mbytes_per_sec": 0, 00:08:21.599 "r_mbytes_per_sec": 0, 00:08:21.599 "w_mbytes_per_sec": 0 00:08:21.599 }, 00:08:21.599 "claimed": true, 00:08:21.599 "claim_type": "exclusive_write", 00:08:21.599 "zoned": false, 00:08:21.600 "supported_io_types": { 00:08:21.600 "read": true, 00:08:21.600 "write": true, 00:08:21.600 "unmap": true, 00:08:21.600 "flush": true, 00:08:21.600 "reset": true, 00:08:21.600 "nvme_admin": false, 00:08:21.600 "nvme_io": false, 00:08:21.600 "nvme_io_md": false, 00:08:21.600 "write_zeroes": true, 00:08:21.600 "zcopy": true, 00:08:21.600 "get_zone_info": false, 00:08:21.600 "zone_management": false, 00:08:21.600 "zone_append": false, 00:08:21.600 "compare": false, 00:08:21.600 "compare_and_write": false, 00:08:21.600 "abort": true, 00:08:21.600 "seek_hole": false, 00:08:21.600 "seek_data": false, 00:08:21.600 "copy": true, 00:08:21.600 "nvme_iov_md": false 00:08:21.600 }, 00:08:21.600 "memory_domains": [ 00:08:21.600 { 00:08:21.600 "dma_device_id": "system", 00:08:21.600 "dma_device_type": 1 00:08:21.600 }, 00:08:21.600 { 00:08:21.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.600 "dma_device_type": 2 00:08:21.600 } 00:08:21.600 ], 00:08:21.600 "driver_specific": {} 00:08:21.600 } 00:08:21.600 ] 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.600 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.866 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:08:21.866 "name": "Existed_Raid", 00:08:21.866 "uuid": "3a66890b-4a2e-11ef-9c8e-7947904e2597", 00:08:21.866 "strip_size_kb": 64, 00:08:21.866 "state": "online", 00:08:21.866 "raid_level": "raid0", 00:08:21.866 "superblock": false, 00:08:21.866 "num_base_bdevs": 3, 00:08:21.866 "num_base_bdevs_discovered": 3, 00:08:21.866 "num_base_bdevs_operational": 3, 00:08:21.866 "base_bdevs_list": [ 00:08:21.866 { 00:08:21.866 "name": "BaseBdev1", 00:08:21.866 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:21.866 "is_configured": true, 00:08:21.866 "data_offset": 0, 00:08:21.866 "data_size": 65536 00:08:21.866 }, 00:08:21.866 { 00:08:21.866 "name": "BaseBdev2", 00:08:21.866 "uuid": "39c99c22-4a2e-11ef-9c8e-7947904e2597", 00:08:21.866 "is_configured": true, 00:08:21.866 "data_offset": 0, 00:08:21.866 "data_size": 65536 00:08:21.866 }, 00:08:21.866 { 00:08:21.866 "name": "BaseBdev3", 00:08:21.866 "uuid": "3a6684f7-4a2e-11ef-9c8e-7947904e2597", 00:08:21.866 "is_configured": true, 00:08:21.866 "data_offset": 0, 00:08:21.866 "data_size": 65536 00:08:21.866 } 00:08:21.866 ] 00:08:21.866 }' 00:08:21.866 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:21.866 02:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:22.138 02:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:22.398 [2024-07-25 02:33:09.080647] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.398 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:22.398 "name": "Existed_Raid", 00:08:22.398 "aliases": [ 00:08:22.398 "3a66890b-4a2e-11ef-9c8e-7947904e2597" 00:08:22.398 ], 00:08:22.398 "product_name": "Raid Volume", 00:08:22.398 "block_size": 512, 00:08:22.398 "num_blocks": 196608, 00:08:22.398 "uuid": "3a66890b-4a2e-11ef-9c8e-7947904e2597", 00:08:22.398 "assigned_rate_limits": { 00:08:22.398 "rw_ios_per_sec": 0, 00:08:22.398 "rw_mbytes_per_sec": 0, 00:08:22.398 "r_mbytes_per_sec": 0, 00:08:22.398 "w_mbytes_per_sec": 0 00:08:22.398 }, 00:08:22.398 "claimed": false, 00:08:22.398 "zoned": false, 00:08:22.398 "supported_io_types": { 00:08:22.398 "read": true, 00:08:22.398 "write": true, 00:08:22.398 "unmap": true, 00:08:22.398 "flush": true, 00:08:22.398 "reset": true, 00:08:22.398 "nvme_admin": false, 00:08:22.398 "nvme_io": false, 00:08:22.398 "nvme_io_md": false, 00:08:22.398 "write_zeroes": true, 00:08:22.398 "zcopy": false, 00:08:22.398 "get_zone_info": false, 00:08:22.398 "zone_management": false, 00:08:22.398 "zone_append": false, 00:08:22.398 "compare": false, 
00:08:22.398 "compare_and_write": false, 00:08:22.398 "abort": false, 00:08:22.398 "seek_hole": false, 00:08:22.398 "seek_data": false, 00:08:22.398 "copy": false, 00:08:22.398 "nvme_iov_md": false 00:08:22.398 }, 00:08:22.398 "memory_domains": [ 00:08:22.398 { 00:08:22.398 "dma_device_id": "system", 00:08:22.398 "dma_device_type": 1 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.398 "dma_device_type": 2 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "dma_device_id": "system", 00:08:22.398 "dma_device_type": 1 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.398 "dma_device_type": 2 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "dma_device_id": "system", 00:08:22.398 "dma_device_type": 1 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.398 "dma_device_type": 2 00:08:22.398 } 00:08:22.398 ], 00:08:22.398 "driver_specific": { 00:08:22.398 "raid": { 00:08:22.398 "uuid": "3a66890b-4a2e-11ef-9c8e-7947904e2597", 00:08:22.398 "strip_size_kb": 64, 00:08:22.398 "state": "online", 00:08:22.398 "raid_level": "raid0", 00:08:22.398 "superblock": false, 00:08:22.398 "num_base_bdevs": 3, 00:08:22.398 "num_base_bdevs_discovered": 3, 00:08:22.398 "num_base_bdevs_operational": 3, 00:08:22.398 "base_bdevs_list": [ 00:08:22.398 { 00:08:22.398 "name": "BaseBdev1", 00:08:22.398 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:22.398 "is_configured": true, 00:08:22.398 "data_offset": 0, 00:08:22.398 "data_size": 65536 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "name": "BaseBdev2", 00:08:22.398 "uuid": "39c99c22-4a2e-11ef-9c8e-7947904e2597", 00:08:22.398 "is_configured": true, 00:08:22.398 "data_offset": 0, 00:08:22.398 "data_size": 65536 00:08:22.398 }, 00:08:22.398 { 00:08:22.398 "name": "BaseBdev3", 00:08:22.398 "uuid": "3a6684f7-4a2e-11ef-9c8e-7947904e2597", 00:08:22.398 "is_configured": true, 00:08:22.398 "data_offset": 0, 00:08:22.398 "data_size": 65536 00:08:22.398 } 00:08:22.398 ] 00:08:22.398 } 00:08:22.398 } 00:08:22.398 }' 00:08:22.398 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.398 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:22.398 BaseBdev2 00:08:22.398 BaseBdev3' 00:08:22.398 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:22.398 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:22.398 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:22.658 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:22.658 "name": "BaseBdev1", 00:08:22.658 "aliases": [ 00:08:22.658 "38b64189-4a2e-11ef-9c8e-7947904e2597" 00:08:22.658 ], 00:08:22.658 "product_name": "Malloc disk", 00:08:22.658 "block_size": 512, 00:08:22.658 "num_blocks": 65536, 00:08:22.658 "uuid": "38b64189-4a2e-11ef-9c8e-7947904e2597", 00:08:22.658 "assigned_rate_limits": { 00:08:22.659 "rw_ios_per_sec": 0, 00:08:22.659 "rw_mbytes_per_sec": 0, 00:08:22.659 "r_mbytes_per_sec": 0, 00:08:22.659 "w_mbytes_per_sec": 0 00:08:22.659 }, 00:08:22.659 "claimed": true, 00:08:22.659 "claim_type": "exclusive_write", 00:08:22.659 "zoned": false, 00:08:22.659 
"supported_io_types": { 00:08:22.659 "read": true, 00:08:22.659 "write": true, 00:08:22.659 "unmap": true, 00:08:22.659 "flush": true, 00:08:22.659 "reset": true, 00:08:22.659 "nvme_admin": false, 00:08:22.659 "nvme_io": false, 00:08:22.659 "nvme_io_md": false, 00:08:22.659 "write_zeroes": true, 00:08:22.659 "zcopy": true, 00:08:22.659 "get_zone_info": false, 00:08:22.659 "zone_management": false, 00:08:22.659 "zone_append": false, 00:08:22.659 "compare": false, 00:08:22.659 "compare_and_write": false, 00:08:22.659 "abort": true, 00:08:22.659 "seek_hole": false, 00:08:22.659 "seek_data": false, 00:08:22.659 "copy": true, 00:08:22.659 "nvme_iov_md": false 00:08:22.659 }, 00:08:22.659 "memory_domains": [ 00:08:22.659 { 00:08:22.659 "dma_device_id": "system", 00:08:22.659 "dma_device_type": 1 00:08:22.659 }, 00:08:22.659 { 00:08:22.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.659 "dma_device_type": 2 00:08:22.659 } 00:08:22.659 ], 00:08:22.659 "driver_specific": {} 00:08:22.659 }' 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:22.659 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:22.919 "name": "BaseBdev2", 00:08:22.919 "aliases": [ 00:08:22.919 "39c99c22-4a2e-11ef-9c8e-7947904e2597" 00:08:22.919 ], 00:08:22.919 "product_name": "Malloc disk", 00:08:22.919 "block_size": 512, 00:08:22.919 "num_blocks": 65536, 00:08:22.919 "uuid": "39c99c22-4a2e-11ef-9c8e-7947904e2597", 00:08:22.919 "assigned_rate_limits": { 00:08:22.919 "rw_ios_per_sec": 0, 00:08:22.919 "rw_mbytes_per_sec": 0, 00:08:22.919 "r_mbytes_per_sec": 0, 00:08:22.919 "w_mbytes_per_sec": 0 00:08:22.919 }, 00:08:22.919 "claimed": true, 00:08:22.919 "claim_type": "exclusive_write", 00:08:22.919 "zoned": false, 00:08:22.919 "supported_io_types": { 00:08:22.919 "read": true, 00:08:22.919 "write": true, 00:08:22.919 "unmap": true, 00:08:22.919 "flush": true, 00:08:22.919 "reset": true, 00:08:22.919 "nvme_admin": false, 
00:08:22.919 "nvme_io": false, 00:08:22.919 "nvme_io_md": false, 00:08:22.919 "write_zeroes": true, 00:08:22.919 "zcopy": true, 00:08:22.919 "get_zone_info": false, 00:08:22.919 "zone_management": false, 00:08:22.919 "zone_append": false, 00:08:22.919 "compare": false, 00:08:22.919 "compare_and_write": false, 00:08:22.919 "abort": true, 00:08:22.919 "seek_hole": false, 00:08:22.919 "seek_data": false, 00:08:22.919 "copy": true, 00:08:22.919 "nvme_iov_md": false 00:08:22.919 }, 00:08:22.919 "memory_domains": [ 00:08:22.919 { 00:08:22.919 "dma_device_id": "system", 00:08:22.919 "dma_device_type": 1 00:08:22.919 }, 00:08:22.919 { 00:08:22.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.919 "dma_device_type": 2 00:08:22.919 } 00:08:22.919 ], 00:08:22.919 "driver_specific": {} 00:08:22.919 }' 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:08:22.919 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:23.179 "name": "BaseBdev3", 00:08:23.179 "aliases": [ 00:08:23.179 "3a6684f7-4a2e-11ef-9c8e-7947904e2597" 00:08:23.179 ], 00:08:23.179 "product_name": "Malloc disk", 00:08:23.179 "block_size": 512, 00:08:23.179 "num_blocks": 65536, 00:08:23.179 "uuid": "3a6684f7-4a2e-11ef-9c8e-7947904e2597", 00:08:23.179 "assigned_rate_limits": { 00:08:23.179 "rw_ios_per_sec": 0, 00:08:23.179 "rw_mbytes_per_sec": 0, 00:08:23.179 "r_mbytes_per_sec": 0, 00:08:23.179 "w_mbytes_per_sec": 0 00:08:23.179 }, 00:08:23.179 "claimed": true, 00:08:23.179 "claim_type": "exclusive_write", 00:08:23.179 "zoned": false, 00:08:23.179 "supported_io_types": { 00:08:23.179 "read": true, 00:08:23.179 "write": true, 00:08:23.179 "unmap": true, 00:08:23.179 "flush": true, 00:08:23.179 "reset": true, 00:08:23.179 "nvme_admin": false, 00:08:23.179 "nvme_io": false, 00:08:23.179 "nvme_io_md": false, 00:08:23.179 "write_zeroes": true, 00:08:23.179 "zcopy": true, 00:08:23.179 "get_zone_info": false, 00:08:23.179 "zone_management": 
false, 00:08:23.179 "zone_append": false, 00:08:23.179 "compare": false, 00:08:23.179 "compare_and_write": false, 00:08:23.179 "abort": true, 00:08:23.179 "seek_hole": false, 00:08:23.179 "seek_data": false, 00:08:23.179 "copy": true, 00:08:23.179 "nvme_iov_md": false 00:08:23.179 }, 00:08:23.179 "memory_domains": [ 00:08:23.179 { 00:08:23.179 "dma_device_id": "system", 00:08:23.179 "dma_device_type": 1 00:08:23.179 }, 00:08:23.179 { 00:08:23.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.179 "dma_device_type": 2 00:08:23.179 } 00:08:23.179 ], 00:08:23.179 "driver_specific": {} 00:08:23.179 }' 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:23.179 02:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:23.439 [2024-07-25 02:33:10.124933] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.439 [2024-07-25 02:33:10.124957] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.439 [2024-07-25 02:33:10.124967] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:23.439 02:33:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.439 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:23.439 "name": "Existed_Raid", 00:08:23.439 "uuid": "3a66890b-4a2e-11ef-9c8e-7947904e2597", 00:08:23.439 "strip_size_kb": 64, 00:08:23.439 "state": "offline", 00:08:23.439 "raid_level": "raid0", 00:08:23.439 "superblock": false, 00:08:23.439 "num_base_bdevs": 3, 00:08:23.439 "num_base_bdevs_discovered": 2, 00:08:23.439 "num_base_bdevs_operational": 2, 00:08:23.439 "base_bdevs_list": [ 00:08:23.439 { 00:08:23.439 "name": null, 00:08:23.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.439 "is_configured": false, 00:08:23.439 "data_offset": 0, 00:08:23.439 "data_size": 65536 00:08:23.439 }, 00:08:23.439 { 00:08:23.439 "name": "BaseBdev2", 00:08:23.440 "uuid": "39c99c22-4a2e-11ef-9c8e-7947904e2597", 00:08:23.440 "is_configured": true, 00:08:23.440 "data_offset": 0, 00:08:23.440 "data_size": 65536 00:08:23.440 }, 00:08:23.440 { 00:08:23.440 "name": "BaseBdev3", 00:08:23.440 "uuid": "3a6684f7-4a2e-11ef-9c8e-7947904e2597", 00:08:23.440 "is_configured": true, 00:08:23.440 "data_offset": 0, 00:08:23.440 "data_size": 65536 00:08:23.440 } 00:08:23.440 ] 00:08:23.440 }' 00:08:23.440 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:23.440 02:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.009 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:24.269 [2024-07-25 02:33:10.965764] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.269 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:24.269 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:24.269 02:33:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.269 02:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:24.528 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:24.529 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.529 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:24.529 [2024-07-25 02:33:11.334422] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:24.529 [2024-07-25 02:33:11.334438] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209a58034a00 name Existed_Raid, state offline 00:08:24.529 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:24.529 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:24.529 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.529 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.789 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:24.789 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:24.789 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:08:24.789 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:08:24.789 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:24.789 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.789 BaseBdev2 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:25.047 02:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.306 [ 00:08:25.306 { 00:08:25.306 "name": "BaseBdev2", 00:08:25.306 "aliases": [ 00:08:25.306 "3c8fefa3-4a2e-11ef-9c8e-7947904e2597" 00:08:25.306 ], 00:08:25.306 "product_name": "Malloc disk", 00:08:25.306 "block_size": 512, 00:08:25.306 "num_blocks": 65536, 00:08:25.306 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 
00:08:25.306 "assigned_rate_limits": { 00:08:25.306 "rw_ios_per_sec": 0, 00:08:25.306 "rw_mbytes_per_sec": 0, 00:08:25.306 "r_mbytes_per_sec": 0, 00:08:25.306 "w_mbytes_per_sec": 0 00:08:25.306 }, 00:08:25.306 "claimed": false, 00:08:25.306 "zoned": false, 00:08:25.306 "supported_io_types": { 00:08:25.306 "read": true, 00:08:25.306 "write": true, 00:08:25.306 "unmap": true, 00:08:25.306 "flush": true, 00:08:25.306 "reset": true, 00:08:25.306 "nvme_admin": false, 00:08:25.306 "nvme_io": false, 00:08:25.306 "nvme_io_md": false, 00:08:25.306 "write_zeroes": true, 00:08:25.306 "zcopy": true, 00:08:25.306 "get_zone_info": false, 00:08:25.306 "zone_management": false, 00:08:25.306 "zone_append": false, 00:08:25.306 "compare": false, 00:08:25.306 "compare_and_write": false, 00:08:25.306 "abort": true, 00:08:25.306 "seek_hole": false, 00:08:25.306 "seek_data": false, 00:08:25.306 "copy": true, 00:08:25.306 "nvme_iov_md": false 00:08:25.306 }, 00:08:25.306 "memory_domains": [ 00:08:25.306 { 00:08:25.306 "dma_device_id": "system", 00:08:25.306 "dma_device_type": 1 00:08:25.306 }, 00:08:25.306 { 00:08:25.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.306 "dma_device_type": 2 00:08:25.306 } 00:08:25.306 ], 00:08:25.306 "driver_specific": {} 00:08:25.306 } 00:08:25.306 ] 00:08:25.306 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:25.306 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:25.306 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:25.306 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.565 BaseBdev3 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:25.565 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.825 [ 00:08:25.825 { 00:08:25.825 "name": "BaseBdev3", 00:08:25.825 "aliases": [ 00:08:25.825 "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597" 00:08:25.825 ], 00:08:25.825 "product_name": "Malloc disk", 00:08:25.825 "block_size": 512, 00:08:25.825 "num_blocks": 65536, 00:08:25.825 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:25.825 "assigned_rate_limits": { 00:08:25.825 "rw_ios_per_sec": 0, 00:08:25.825 "rw_mbytes_per_sec": 0, 00:08:25.825 "r_mbytes_per_sec": 0, 00:08:25.825 "w_mbytes_per_sec": 0 00:08:25.825 }, 00:08:25.825 "claimed": false, 00:08:25.825 "zoned": false, 00:08:25.825 "supported_io_types": { 00:08:25.825 "read": true, 00:08:25.825 "write": 
true, 00:08:25.825 "unmap": true, 00:08:25.825 "flush": true, 00:08:25.825 "reset": true, 00:08:25.825 "nvme_admin": false, 00:08:25.825 "nvme_io": false, 00:08:25.825 "nvme_io_md": false, 00:08:25.825 "write_zeroes": true, 00:08:25.825 "zcopy": true, 00:08:25.825 "get_zone_info": false, 00:08:25.825 "zone_management": false, 00:08:25.825 "zone_append": false, 00:08:25.825 "compare": false, 00:08:25.825 "compare_and_write": false, 00:08:25.825 "abort": true, 00:08:25.825 "seek_hole": false, 00:08:25.825 "seek_data": false, 00:08:25.825 "copy": true, 00:08:25.825 "nvme_iov_md": false 00:08:25.825 }, 00:08:25.825 "memory_domains": [ 00:08:25.825 { 00:08:25.825 "dma_device_id": "system", 00:08:25.825 "dma_device_type": 1 00:08:25.825 }, 00:08:25.825 { 00:08:25.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.825 "dma_device_type": 2 00:08:25.825 } 00:08:25.825 ], 00:08:25.825 "driver_specific": {} 00:08:25.825 } 00:08:25.825 ] 00:08:25.825 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:25.825 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:25.825 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:25.825 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:26.085 [2024-07-25 02:33:12.759473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.085 [2024-07-25 02:33:12.759510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.085 [2024-07-25 02:33:12.759515] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.085 [2024-07-25 02:33:12.759901] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:26.085 "name": "Existed_Raid", 00:08:26.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.085 "strip_size_kb": 64, 00:08:26.085 "state": "configuring", 00:08:26.085 "raid_level": "raid0", 00:08:26.085 "superblock": false, 00:08:26.085 "num_base_bdevs": 3, 00:08:26.085 "num_base_bdevs_discovered": 2, 00:08:26.085 "num_base_bdevs_operational": 3, 00:08:26.085 "base_bdevs_list": [ 00:08:26.085 { 00:08:26.085 "name": "BaseBdev1", 00:08:26.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.085 "is_configured": false, 00:08:26.085 "data_offset": 0, 00:08:26.085 "data_size": 0 00:08:26.085 }, 00:08:26.085 { 00:08:26.085 "name": "BaseBdev2", 00:08:26.085 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:26.085 "is_configured": true, 00:08:26.085 "data_offset": 0, 00:08:26.085 "data_size": 65536 00:08:26.085 }, 00:08:26.085 { 00:08:26.085 "name": "BaseBdev3", 00:08:26.085 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:26.085 "is_configured": true, 00:08:26.085 "data_offset": 0, 00:08:26.085 "data_size": 65536 00:08:26.085 } 00:08:26.085 ] 00:08:26.085 }' 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:26.085 02:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.345 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:08:26.604 [2024-07-25 02:33:13.355636] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.604 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.863 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:26.863 "name": "Existed_Raid", 00:08:26.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.863 "strip_size_kb": 64, 00:08:26.863 "state": "configuring", 00:08:26.863 "raid_level": "raid0", 00:08:26.863 "superblock": false, 00:08:26.863 "num_base_bdevs": 3, 00:08:26.863 "num_base_bdevs_discovered": 1, 
00:08:26.863 "num_base_bdevs_operational": 3, 00:08:26.863 "base_bdevs_list": [ 00:08:26.863 { 00:08:26.863 "name": "BaseBdev1", 00:08:26.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.863 "is_configured": false, 00:08:26.863 "data_offset": 0, 00:08:26.863 "data_size": 0 00:08:26.863 }, 00:08:26.863 { 00:08:26.863 "name": null, 00:08:26.863 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:26.863 "is_configured": false, 00:08:26.863 "data_offset": 0, 00:08:26.863 "data_size": 65536 00:08:26.863 }, 00:08:26.863 { 00:08:26.863 "name": "BaseBdev3", 00:08:26.863 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:26.863 "is_configured": true, 00:08:26.863 "data_offset": 0, 00:08:26.863 "data_size": 65536 00:08:26.863 } 00:08:26.863 ] 00:08:26.863 }' 00:08:26.863 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:26.863 02:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.122 02:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.122 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:08:27.122 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.381 [2024-07-25 02:33:14.179949] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.381 BaseBdev1 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:27.381 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:27.640 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.899 [ 00:08:27.899 { 00:08:27.899 "name": "BaseBdev1", 00:08:27.899 "aliases": [ 00:08:27.899 "3e0d82d7-4a2e-11ef-9c8e-7947904e2597" 00:08:27.899 ], 00:08:27.899 "product_name": "Malloc disk", 00:08:27.899 "block_size": 512, 00:08:27.899 "num_blocks": 65536, 00:08:27.899 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:27.899 "assigned_rate_limits": { 00:08:27.899 "rw_ios_per_sec": 0, 00:08:27.899 "rw_mbytes_per_sec": 0, 00:08:27.899 "r_mbytes_per_sec": 0, 00:08:27.899 "w_mbytes_per_sec": 0 00:08:27.899 }, 00:08:27.899 "claimed": true, 00:08:27.899 "claim_type": "exclusive_write", 00:08:27.899 "zoned": false, 00:08:27.899 "supported_io_types": { 00:08:27.899 "read": true, 00:08:27.899 "write": true, 00:08:27.899 "unmap": 
true, 00:08:27.899 "flush": true, 00:08:27.899 "reset": true, 00:08:27.899 "nvme_admin": false, 00:08:27.899 "nvme_io": false, 00:08:27.899 "nvme_io_md": false, 00:08:27.899 "write_zeroes": true, 00:08:27.899 "zcopy": true, 00:08:27.899 "get_zone_info": false, 00:08:27.899 "zone_management": false, 00:08:27.899 "zone_append": false, 00:08:27.899 "compare": false, 00:08:27.899 "compare_and_write": false, 00:08:27.899 "abort": true, 00:08:27.899 "seek_hole": false, 00:08:27.899 "seek_data": false, 00:08:27.899 "copy": true, 00:08:27.899 "nvme_iov_md": false 00:08:27.899 }, 00:08:27.899 "memory_domains": [ 00:08:27.899 { 00:08:27.899 "dma_device_id": "system", 00:08:27.899 "dma_device_type": 1 00:08:27.899 }, 00:08:27.899 { 00:08:27.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.899 "dma_device_type": 2 00:08:27.899 } 00:08:27.899 ], 00:08:27.899 "driver_specific": {} 00:08:27.899 } 00:08:27.899 ] 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.899 "name": "Existed_Raid", 00:08:27.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.899 "strip_size_kb": 64, 00:08:27.899 "state": "configuring", 00:08:27.899 "raid_level": "raid0", 00:08:27.899 "superblock": false, 00:08:27.899 "num_base_bdevs": 3, 00:08:27.899 "num_base_bdevs_discovered": 2, 00:08:27.899 "num_base_bdevs_operational": 3, 00:08:27.899 "base_bdevs_list": [ 00:08:27.899 { 00:08:27.899 "name": "BaseBdev1", 00:08:27.899 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:27.899 "is_configured": true, 00:08:27.899 "data_offset": 0, 00:08:27.899 "data_size": 65536 00:08:27.899 }, 00:08:27.899 { 00:08:27.899 "name": null, 00:08:27.899 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:27.899 "is_configured": false, 00:08:27.899 "data_offset": 0, 00:08:27.899 "data_size": 65536 00:08:27.899 }, 00:08:27.899 { 00:08:27.899 "name": "BaseBdev3", 00:08:27.899 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 
00:08:27.899 "is_configured": true, 00:08:27.899 "data_offset": 0, 00:08:27.899 "data_size": 65536 00:08:27.899 } 00:08:27.899 ] 00:08:27.899 }' 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.899 02:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.159 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.159 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.418 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:08:28.418 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:08:28.677 [2024-07-25 02:33:15.356168] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:28.677 "name": "Existed_Raid", 00:08:28.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.677 "strip_size_kb": 64, 00:08:28.677 "state": "configuring", 00:08:28.677 "raid_level": "raid0", 00:08:28.677 "superblock": false, 00:08:28.677 "num_base_bdevs": 3, 00:08:28.677 "num_base_bdevs_discovered": 1, 00:08:28.677 "num_base_bdevs_operational": 3, 00:08:28.677 "base_bdevs_list": [ 00:08:28.677 { 00:08:28.677 "name": "BaseBdev1", 00:08:28.677 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:28.677 "is_configured": true, 00:08:28.677 "data_offset": 0, 00:08:28.677 "data_size": 65536 00:08:28.677 }, 00:08:28.677 { 00:08:28.677 "name": null, 00:08:28.677 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:28.677 "is_configured": false, 00:08:28.677 "data_offset": 0, 00:08:28.677 "data_size": 65536 00:08:28.677 }, 00:08:28.677 { 00:08:28.677 "name": null, 00:08:28.677 "uuid": 
"3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:28.677 "is_configured": false, 00:08:28.677 "data_offset": 0, 00:08:28.677 "data_size": 65536 00:08:28.677 } 00:08:28.677 ] 00:08:28.677 }' 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:28.677 02:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.246 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.246 02:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:29.246 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:08:29.246 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:29.509 [2024-07-25 02:33:16.184378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.509 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:29.509 "name": "Existed_Raid", 00:08:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.509 "strip_size_kb": 64, 00:08:29.509 "state": "configuring", 00:08:29.509 "raid_level": "raid0", 00:08:29.509 "superblock": false, 00:08:29.509 "num_base_bdevs": 3, 00:08:29.509 "num_base_bdevs_discovered": 2, 00:08:29.509 "num_base_bdevs_operational": 3, 00:08:29.509 "base_bdevs_list": [ 00:08:29.509 { 00:08:29.509 "name": "BaseBdev1", 00:08:29.509 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:29.509 "is_configured": true, 00:08:29.509 "data_offset": 0, 00:08:29.509 "data_size": 65536 00:08:29.509 }, 00:08:29.509 { 00:08:29.509 "name": null, 00:08:29.509 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:29.509 "is_configured": false, 00:08:29.509 "data_offset": 0, 00:08:29.509 "data_size": 65536 
00:08:29.509 }, 00:08:29.509 { 00:08:29.509 "name": "BaseBdev3", 00:08:29.509 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:29.509 "is_configured": true, 00:08:29.509 "data_offset": 0, 00:08:29.509 "data_size": 65536 00:08:29.509 } 00:08:29.509 ] 00:08:29.509 }' 00:08:29.510 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:29.510 02:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.774 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.774 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.034 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:08:30.034 02:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:30.293 [2024-07-25 02:33:17.020593] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.293 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.552 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:30.552 "name": "Existed_Raid", 00:08:30.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.552 "strip_size_kb": 64, 00:08:30.552 "state": "configuring", 00:08:30.552 "raid_level": "raid0", 00:08:30.552 "superblock": false, 00:08:30.552 "num_base_bdevs": 3, 00:08:30.552 "num_base_bdevs_discovered": 1, 00:08:30.552 "num_base_bdevs_operational": 3, 00:08:30.552 "base_bdevs_list": [ 00:08:30.552 { 00:08:30.552 "name": null, 00:08:30.552 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:30.552 "is_configured": false, 00:08:30.552 "data_offset": 0, 00:08:30.552 "data_size": 65536 00:08:30.552 }, 00:08:30.552 { 00:08:30.552 "name": null, 00:08:30.552 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:30.552 "is_configured": false, 00:08:30.552 "data_offset": 
0, 00:08:30.552 "data_size": 65536 00:08:30.552 }, 00:08:30.552 { 00:08:30.552 "name": "BaseBdev3", 00:08:30.552 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:30.552 "is_configured": true, 00:08:30.552 "data_offset": 0, 00:08:30.552 "data_size": 65536 00:08:30.552 } 00:08:30.552 ] 00:08:30.552 }' 00:08:30.552 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:30.552 02:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.811 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.811 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.811 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:08:30.811 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:31.070 [2024-07-25 02:33:17.841361] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.070 02:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.330 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:31.330 "name": "Existed_Raid", 00:08:31.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.330 "strip_size_kb": 64, 00:08:31.330 "state": "configuring", 00:08:31.330 "raid_level": "raid0", 00:08:31.330 "superblock": false, 00:08:31.330 "num_base_bdevs": 3, 00:08:31.330 "num_base_bdevs_discovered": 2, 00:08:31.330 "num_base_bdevs_operational": 3, 00:08:31.330 "base_bdevs_list": [ 00:08:31.330 { 00:08:31.330 "name": null, 00:08:31.330 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:31.330 "is_configured": false, 00:08:31.330 "data_offset": 0, 00:08:31.330 "data_size": 65536 00:08:31.330 }, 00:08:31.330 { 00:08:31.330 "name": "BaseBdev2", 00:08:31.330 "uuid": 
"3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:31.330 "is_configured": true, 00:08:31.330 "data_offset": 0, 00:08:31.330 "data_size": 65536 00:08:31.330 }, 00:08:31.330 { 00:08:31.330 "name": "BaseBdev3", 00:08:31.330 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:31.330 "is_configured": true, 00:08:31.330 "data_offset": 0, 00:08:31.330 "data_size": 65536 00:08:31.330 } 00:08:31.330 ] 00:08:31.330 }' 00:08:31.330 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:31.330 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.590 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.590 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:31.590 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:08:31.848 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.849 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:31.849 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3e0d82d7-4a2e-11ef-9c8e-7947904e2597 00:08:32.107 [2024-07-25 02:33:18.853711] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:32.108 [2024-07-25 02:33:18.853730] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x209a58034a00 00:08:32.108 [2024-07-25 02:33:18.853733] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:32.108 [2024-07-25 02:33:18.853766] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209a58097e20 00:08:32.108 [2024-07-25 02:33:18.853813] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x209a58034a00 00:08:32.108 [2024-07-25 02:33:18.853816] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x209a58034a00 00:08:32.108 [2024-07-25 02:33:18.853841] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.108 NewBaseBdev 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:32.108 02:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:08:32.367 [ 00:08:32.367 { 00:08:32.367 "name": "NewBaseBdev", 00:08:32.367 "aliases": [ 00:08:32.367 "3e0d82d7-4a2e-11ef-9c8e-7947904e2597" 00:08:32.367 ], 00:08:32.367 "product_name": "Malloc disk", 00:08:32.367 "block_size": 512, 00:08:32.367 "num_blocks": 65536, 00:08:32.367 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:32.367 "assigned_rate_limits": { 00:08:32.367 "rw_ios_per_sec": 0, 00:08:32.367 "rw_mbytes_per_sec": 0, 00:08:32.367 "r_mbytes_per_sec": 0, 00:08:32.367 "w_mbytes_per_sec": 0 00:08:32.367 }, 00:08:32.367 "claimed": true, 00:08:32.367 "claim_type": "exclusive_write", 00:08:32.367 "zoned": false, 00:08:32.367 "supported_io_types": { 00:08:32.367 "read": true, 00:08:32.367 "write": true, 00:08:32.367 "unmap": true, 00:08:32.367 "flush": true, 00:08:32.367 "reset": true, 00:08:32.367 "nvme_admin": false, 00:08:32.367 "nvme_io": false, 00:08:32.367 "nvme_io_md": false, 00:08:32.367 "write_zeroes": true, 00:08:32.367 "zcopy": true, 00:08:32.367 "get_zone_info": false, 00:08:32.367 "zone_management": false, 00:08:32.367 "zone_append": false, 00:08:32.367 "compare": false, 00:08:32.367 "compare_and_write": false, 00:08:32.367 "abort": true, 00:08:32.367 "seek_hole": false, 00:08:32.367 "seek_data": false, 00:08:32.367 "copy": true, 00:08:32.367 "nvme_iov_md": false 00:08:32.367 }, 00:08:32.367 "memory_domains": [ 00:08:32.367 { 00:08:32.367 "dma_device_id": "system", 00:08:32.367 "dma_device_type": 1 00:08:32.367 }, 00:08:32.367 { 00:08:32.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.367 "dma_device_type": 2 00:08:32.367 } 00:08:32.367 ], 00:08:32.367 "driver_specific": {} 00:08:32.367 } 00:08:32.367 ] 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.367 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.626 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:32.626 "name": "Existed_Raid", 00:08:32.626 "uuid": "40d6b027-4a2e-11ef-9c8e-7947904e2597", 00:08:32.626 "strip_size_kb": 64, 00:08:32.626 "state": "online", 00:08:32.626 "raid_level": "raid0", 
00:08:32.626 "superblock": false, 00:08:32.626 "num_base_bdevs": 3, 00:08:32.626 "num_base_bdevs_discovered": 3, 00:08:32.626 "num_base_bdevs_operational": 3, 00:08:32.626 "base_bdevs_list": [ 00:08:32.626 { 00:08:32.626 "name": "NewBaseBdev", 00:08:32.626 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:32.626 "is_configured": true, 00:08:32.626 "data_offset": 0, 00:08:32.626 "data_size": 65536 00:08:32.626 }, 00:08:32.626 { 00:08:32.626 "name": "BaseBdev2", 00:08:32.626 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:32.626 "is_configured": true, 00:08:32.626 "data_offset": 0, 00:08:32.626 "data_size": 65536 00:08:32.626 }, 00:08:32.626 { 00:08:32.626 "name": "BaseBdev3", 00:08:32.626 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:32.626 "is_configured": true, 00:08:32.626 "data_offset": 0, 00:08:32.626 "data_size": 65536 00:08:32.626 } 00:08:32.626 ] 00:08:32.626 }' 00:08:32.627 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:32.627 02:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:32.885 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:33.143 [2024-07-25 02:33:19.861884] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.143 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:33.143 "name": "Existed_Raid", 00:08:33.143 "aliases": [ 00:08:33.143 "40d6b027-4a2e-11ef-9c8e-7947904e2597" 00:08:33.143 ], 00:08:33.143 "product_name": "Raid Volume", 00:08:33.143 "block_size": 512, 00:08:33.143 "num_blocks": 196608, 00:08:33.143 "uuid": "40d6b027-4a2e-11ef-9c8e-7947904e2597", 00:08:33.143 "assigned_rate_limits": { 00:08:33.143 "rw_ios_per_sec": 0, 00:08:33.143 "rw_mbytes_per_sec": 0, 00:08:33.143 "r_mbytes_per_sec": 0, 00:08:33.143 "w_mbytes_per_sec": 0 00:08:33.143 }, 00:08:33.143 "claimed": false, 00:08:33.143 "zoned": false, 00:08:33.143 "supported_io_types": { 00:08:33.143 "read": true, 00:08:33.143 "write": true, 00:08:33.143 "unmap": true, 00:08:33.143 "flush": true, 00:08:33.143 "reset": true, 00:08:33.143 "nvme_admin": false, 00:08:33.143 "nvme_io": false, 00:08:33.143 "nvme_io_md": false, 00:08:33.143 "write_zeroes": true, 00:08:33.143 "zcopy": false, 00:08:33.143 "get_zone_info": false, 00:08:33.143 "zone_management": false, 00:08:33.143 "zone_append": false, 00:08:33.143 "compare": false, 00:08:33.143 "compare_and_write": false, 00:08:33.143 "abort": false, 00:08:33.143 "seek_hole": false, 00:08:33.143 "seek_data": false, 00:08:33.143 "copy": false, 00:08:33.143 "nvme_iov_md": false 00:08:33.143 }, 00:08:33.143 
"memory_domains": [ 00:08:33.143 { 00:08:33.143 "dma_device_id": "system", 00:08:33.143 "dma_device_type": 1 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.143 "dma_device_type": 2 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "dma_device_id": "system", 00:08:33.143 "dma_device_type": 1 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.143 "dma_device_type": 2 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "dma_device_id": "system", 00:08:33.143 "dma_device_type": 1 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.143 "dma_device_type": 2 00:08:33.143 } 00:08:33.143 ], 00:08:33.143 "driver_specific": { 00:08:33.143 "raid": { 00:08:33.143 "uuid": "40d6b027-4a2e-11ef-9c8e-7947904e2597", 00:08:33.143 "strip_size_kb": 64, 00:08:33.143 "state": "online", 00:08:33.143 "raid_level": "raid0", 00:08:33.143 "superblock": false, 00:08:33.143 "num_base_bdevs": 3, 00:08:33.143 "num_base_bdevs_discovered": 3, 00:08:33.143 "num_base_bdevs_operational": 3, 00:08:33.143 "base_bdevs_list": [ 00:08:33.143 { 00:08:33.143 "name": "NewBaseBdev", 00:08:33.143 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:33.143 "is_configured": true, 00:08:33.143 "data_offset": 0, 00:08:33.143 "data_size": 65536 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "name": "BaseBdev2", 00:08:33.143 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:33.143 "is_configured": true, 00:08:33.143 "data_offset": 0, 00:08:33.143 "data_size": 65536 00:08:33.143 }, 00:08:33.143 { 00:08:33.143 "name": "BaseBdev3", 00:08:33.143 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:33.143 "is_configured": true, 00:08:33.143 "data_offset": 0, 00:08:33.143 "data_size": 65536 00:08:33.143 } 00:08:33.143 ] 00:08:33.143 } 00:08:33.144 } 00:08:33.144 }' 00:08:33.144 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.144 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:08:33.144 BaseBdev2 00:08:33.144 BaseBdev3' 00:08:33.144 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:33.144 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:08:33.144 02:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:33.402 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:33.402 "name": "NewBaseBdev", 00:08:33.402 "aliases": [ 00:08:33.402 "3e0d82d7-4a2e-11ef-9c8e-7947904e2597" 00:08:33.402 ], 00:08:33.402 "product_name": "Malloc disk", 00:08:33.402 "block_size": 512, 00:08:33.403 "num_blocks": 65536, 00:08:33.403 "uuid": "3e0d82d7-4a2e-11ef-9c8e-7947904e2597", 00:08:33.403 "assigned_rate_limits": { 00:08:33.403 "rw_ios_per_sec": 0, 00:08:33.403 "rw_mbytes_per_sec": 0, 00:08:33.403 "r_mbytes_per_sec": 0, 00:08:33.403 "w_mbytes_per_sec": 0 00:08:33.403 }, 00:08:33.403 "claimed": true, 00:08:33.403 "claim_type": "exclusive_write", 00:08:33.403 "zoned": false, 00:08:33.403 "supported_io_types": { 00:08:33.403 "read": true, 00:08:33.403 "write": true, 00:08:33.403 "unmap": true, 00:08:33.403 "flush": true, 00:08:33.403 "reset": true, 00:08:33.403 "nvme_admin": false, 00:08:33.403 "nvme_io": false, 
00:08:33.403 "nvme_io_md": false, 00:08:33.403 "write_zeroes": true, 00:08:33.403 "zcopy": true, 00:08:33.403 "get_zone_info": false, 00:08:33.403 "zone_management": false, 00:08:33.403 "zone_append": false, 00:08:33.403 "compare": false, 00:08:33.403 "compare_and_write": false, 00:08:33.403 "abort": true, 00:08:33.403 "seek_hole": false, 00:08:33.403 "seek_data": false, 00:08:33.403 "copy": true, 00:08:33.403 "nvme_iov_md": false 00:08:33.403 }, 00:08:33.403 "memory_domains": [ 00:08:33.403 { 00:08:33.403 "dma_device_id": "system", 00:08:33.403 "dma_device_type": 1 00:08:33.403 }, 00:08:33.403 { 00:08:33.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.403 "dma_device_type": 2 00:08:33.403 } 00:08:33.403 ], 00:08:33.403 "driver_specific": {} 00:08:33.403 }' 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:33.403 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:33.678 "name": "BaseBdev2", 00:08:33.678 "aliases": [ 00:08:33.678 "3c8fefa3-4a2e-11ef-9c8e-7947904e2597" 00:08:33.678 ], 00:08:33.678 "product_name": "Malloc disk", 00:08:33.678 "block_size": 512, 00:08:33.678 "num_blocks": 65536, 00:08:33.678 "uuid": "3c8fefa3-4a2e-11ef-9c8e-7947904e2597", 00:08:33.678 "assigned_rate_limits": { 00:08:33.678 "rw_ios_per_sec": 0, 00:08:33.678 "rw_mbytes_per_sec": 0, 00:08:33.678 "r_mbytes_per_sec": 0, 00:08:33.678 "w_mbytes_per_sec": 0 00:08:33.678 }, 00:08:33.678 "claimed": true, 00:08:33.678 "claim_type": "exclusive_write", 00:08:33.678 "zoned": false, 00:08:33.678 "supported_io_types": { 00:08:33.678 "read": true, 00:08:33.678 "write": true, 00:08:33.678 "unmap": true, 00:08:33.678 "flush": true, 00:08:33.678 "reset": true, 00:08:33.678 "nvme_admin": false, 00:08:33.678 "nvme_io": false, 00:08:33.678 "nvme_io_md": false, 00:08:33.678 "write_zeroes": true, 00:08:33.678 "zcopy": true, 00:08:33.678 "get_zone_info": false, 00:08:33.678 "zone_management": false, 00:08:33.678 "zone_append": 
false, 00:08:33.678 "compare": false, 00:08:33.678 "compare_and_write": false, 00:08:33.678 "abort": true, 00:08:33.678 "seek_hole": false, 00:08:33.678 "seek_data": false, 00:08:33.678 "copy": true, 00:08:33.678 "nvme_iov_md": false 00:08:33.678 }, 00:08:33.678 "memory_domains": [ 00:08:33.678 { 00:08:33.678 "dma_device_id": "system", 00:08:33.678 "dma_device_type": 1 00:08:33.678 }, 00:08:33.678 { 00:08:33.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.678 "dma_device_type": 2 00:08:33.678 } 00:08:33.678 ], 00:08:33.678 "driver_specific": {} 00:08:33.678 }' 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:08:33.678 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:33.942 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:33.942 "name": "BaseBdev3", 00:08:33.942 "aliases": [ 00:08:33.942 "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597" 00:08:33.942 ], 00:08:33.942 "product_name": "Malloc disk", 00:08:33.942 "block_size": 512, 00:08:33.942 "num_blocks": 65536, 00:08:33.942 "uuid": "3ce4ccdb-4a2e-11ef-9c8e-7947904e2597", 00:08:33.942 "assigned_rate_limits": { 00:08:33.942 "rw_ios_per_sec": 0, 00:08:33.942 "rw_mbytes_per_sec": 0, 00:08:33.942 "r_mbytes_per_sec": 0, 00:08:33.942 "w_mbytes_per_sec": 0 00:08:33.942 }, 00:08:33.942 "claimed": true, 00:08:33.942 "claim_type": "exclusive_write", 00:08:33.942 "zoned": false, 00:08:33.942 "supported_io_types": { 00:08:33.942 "read": true, 00:08:33.942 "write": true, 00:08:33.942 "unmap": true, 00:08:33.942 "flush": true, 00:08:33.942 "reset": true, 00:08:33.942 "nvme_admin": false, 00:08:33.943 "nvme_io": false, 00:08:33.943 "nvme_io_md": false, 00:08:33.943 "write_zeroes": true, 00:08:33.943 "zcopy": true, 00:08:33.943 "get_zone_info": false, 00:08:33.943 "zone_management": false, 00:08:33.943 "zone_append": false, 00:08:33.943 "compare": false, 00:08:33.943 "compare_and_write": false, 00:08:33.943 "abort": true, 00:08:33.943 "seek_hole": false, 00:08:33.943 "seek_data": false, 00:08:33.943 "copy": true, 
00:08:33.943 "nvme_iov_md": false 00:08:33.943 }, 00:08:33.943 "memory_domains": [ 00:08:33.943 { 00:08:33.943 "dma_device_id": "system", 00:08:33.943 "dma_device_type": 1 00:08:33.943 }, 00:08:33.943 { 00:08:33.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.943 "dma_device_type": 2 00:08:33.943 } 00:08:33.943 ], 00:08:33.943 "driver_specific": {} 00:08:33.943 }' 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:33.943 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:34.202 [2024-07-25 02:33:20.890106] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.202 [2024-07-25 02:33:20.890119] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.202 [2024-07-25 02:33:20.890131] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.202 [2024-07-25 02:33:20.890139] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.202 [2024-07-25 02:33:20.890143] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209a58034a00 name Existed_Raid, state offline 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51866 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51866 ']' 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51866 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51866 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:34.202 killing process with pid 51866 00:08:34.202 02:33:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51866' 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51866 00:08:34.202 [2024-07-25 02:33:20.919854] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.202 02:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51866 00:08:34.202 [2024-07-25 02:33:20.933371] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.202 02:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:34.202 00:08:34.202 real 0m17.967s 00:08:34.202 user 0m32.096s 00:08:34.202 sys 0m3.208s 00:08:34.203 02:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.203 02:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.203 ************************************ 00:08:34.203 END TEST raid_state_function_test 00:08:34.203 ************************************ 00:08:34.462 02:33:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:34.462 02:33:21 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:34.462 02:33:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:34.462 02:33:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.462 02:33:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.462 ************************************ 00:08:34.462 START TEST raid_state_function_test_sb 00:08:34.462 ************************************ 00:08:34.462 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:08:34.462 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52571 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52571' 00:08:34.463 Process raid pid: 52571 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52571 /var/tmp/spdk-raid.sock 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52571 ']' 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.463 02:33:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.463 [2024-07-25 02:33:21.181029] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:08:34.463 [2024-07-25 02:33:21.181284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:34.722 EAL: TSC is not safe to use in SMP mode 00:08:34.722 EAL: TSC is not invariant 00:08:34.722 [2024-07-25 02:33:21.604449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.982 [2024-07-25 02:33:21.697289] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
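For context, the RPC traffic captured above can be replayed by hand against the same standalone bdev_svc app; the following is a minimal sketch of that flow, assuming the repository path and RPC socket shown in this log (/home/vagrant/spdk_repo/spdk and /var/tmp/spdk-raid.sock) and creating the malloc base bdevs up front rather than reproducing the test's exact ordering:

    # start the standalone bdev service on a private RPC socket with raid debug logging
    # (the test's waitforlisten helper waits for this socket before issuing RPCs)
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # create three 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each, as in the dumps above)
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done

    # assemble a raid0 volume with a 64 KiB strip size; -s requests an on-disk superblock
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # query the array state the same way the verify_raid_bdev_state helper does
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The state, raid_level, strip_size_kb and num_base_bdevs_discovered fields of that JSON are the values the assertions in this log are checking.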
00:08:34.982 [2024-07-25 02:33:21.698999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.982 [2024-07-25 02:33:21.699578] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.982 [2024-07-25 02:33:21.699589] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.241 02:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.241 02:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:35.241 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:35.500 [2024-07-25 02:33:22.226634] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.500 [2024-07-25 02:33:22.226670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.500 [2024-07-25 02:33:22.226690] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.500 [2024-07-25 02:33:22.226696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.500 [2024-07-25 02:33:22.226698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.500 [2024-07-25 02:33:22.226704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.500 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.775 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:35.775 "name": "Existed_Raid", 00:08:35.775 "uuid": "42d959c0-4a2e-11ef-9c8e-7947904e2597", 00:08:35.775 "strip_size_kb": 64, 00:08:35.775 "state": "configuring", 00:08:35.775 "raid_level": "raid0", 00:08:35.775 "superblock": true, 00:08:35.775 "num_base_bdevs": 3, 00:08:35.775 "num_base_bdevs_discovered": 0, 00:08:35.775 
"num_base_bdevs_operational": 3, 00:08:35.775 "base_bdevs_list": [ 00:08:35.775 { 00:08:35.775 "name": "BaseBdev1", 00:08:35.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.775 "is_configured": false, 00:08:35.775 "data_offset": 0, 00:08:35.775 "data_size": 0 00:08:35.775 }, 00:08:35.775 { 00:08:35.775 "name": "BaseBdev2", 00:08:35.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.775 "is_configured": false, 00:08:35.775 "data_offset": 0, 00:08:35.775 "data_size": 0 00:08:35.775 }, 00:08:35.775 { 00:08:35.775 "name": "BaseBdev3", 00:08:35.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.775 "is_configured": false, 00:08:35.775 "data_offset": 0, 00:08:35.775 "data_size": 0 00:08:35.775 } 00:08:35.775 ] 00:08:35.775 }' 00:08:35.775 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:35.775 02:33:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.056 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:36.056 [2024-07-25 02:33:22.882762] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.056 [2024-07-25 02:33:22.882784] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279e2f434500 name Existed_Raid, state configuring 00:08:36.056 02:33:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:36.315 [2024-07-25 02:33:23.070804] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.315 [2024-07-25 02:33:23.070855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.315 [2024-07-25 02:33:23.070859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.315 [2024-07-25 02:33:23.070865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.315 [2024-07-25 02:33:23.070867] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:36.315 [2024-07-25 02:33:23.070873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.315 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.577 [2024-07-25 02:33:23.255604] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.578 BaseBdev1 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:36.578 02:33:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:36.578 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.838 [ 00:08:36.838 { 00:08:36.838 "name": "BaseBdev1", 00:08:36.838 "aliases": [ 00:08:36.838 "43763e71-4a2e-11ef-9c8e-7947904e2597" 00:08:36.838 ], 00:08:36.838 "product_name": "Malloc disk", 00:08:36.838 "block_size": 512, 00:08:36.838 "num_blocks": 65536, 00:08:36.838 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:36.838 "assigned_rate_limits": { 00:08:36.838 "rw_ios_per_sec": 0, 00:08:36.838 "rw_mbytes_per_sec": 0, 00:08:36.838 "r_mbytes_per_sec": 0, 00:08:36.838 "w_mbytes_per_sec": 0 00:08:36.838 }, 00:08:36.838 "claimed": true, 00:08:36.838 "claim_type": "exclusive_write", 00:08:36.838 "zoned": false, 00:08:36.838 "supported_io_types": { 00:08:36.838 "read": true, 00:08:36.838 "write": true, 00:08:36.838 "unmap": true, 00:08:36.838 "flush": true, 00:08:36.838 "reset": true, 00:08:36.838 "nvme_admin": false, 00:08:36.838 "nvme_io": false, 00:08:36.838 "nvme_io_md": false, 00:08:36.838 "write_zeroes": true, 00:08:36.838 "zcopy": true, 00:08:36.838 "get_zone_info": false, 00:08:36.838 "zone_management": false, 00:08:36.838 "zone_append": false, 00:08:36.838 "compare": false, 00:08:36.838 "compare_and_write": false, 00:08:36.838 "abort": true, 00:08:36.838 "seek_hole": false, 00:08:36.838 "seek_data": false, 00:08:36.838 "copy": true, 00:08:36.838 "nvme_iov_md": false 00:08:36.838 }, 00:08:36.838 "memory_domains": [ 00:08:36.838 { 00:08:36.838 "dma_device_id": "system", 00:08:36.838 "dma_device_type": 1 00:08:36.838 }, 00:08:36.838 { 00:08:36.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.838 "dma_device_type": 2 00:08:36.838 } 00:08:36.838 ], 00:08:36.838 "driver_specific": {} 00:08:36.838 } 00:08:36.838 ] 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.838 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.097 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:37.097 "name": "Existed_Raid", 00:08:37.097 "uuid": "435a292c-4a2e-11ef-9c8e-7947904e2597", 00:08:37.097 "strip_size_kb": 64, 00:08:37.097 "state": "configuring", 00:08:37.097 "raid_level": "raid0", 00:08:37.097 "superblock": true, 00:08:37.097 "num_base_bdevs": 3, 00:08:37.097 "num_base_bdevs_discovered": 1, 00:08:37.097 "num_base_bdevs_operational": 3, 00:08:37.097 "base_bdevs_list": [ 00:08:37.097 { 00:08:37.097 "name": "BaseBdev1", 00:08:37.097 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:37.097 "is_configured": true, 00:08:37.097 "data_offset": 2048, 00:08:37.097 "data_size": 63488 00:08:37.097 }, 00:08:37.097 { 00:08:37.097 "name": "BaseBdev2", 00:08:37.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.097 "is_configured": false, 00:08:37.097 "data_offset": 0, 00:08:37.097 "data_size": 0 00:08:37.097 }, 00:08:37.097 { 00:08:37.097 "name": "BaseBdev3", 00:08:37.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.097 "is_configured": false, 00:08:37.097 "data_offset": 0, 00:08:37.097 "data_size": 0 00:08:37.097 } 00:08:37.097 ] 00:08:37.097 }' 00:08:37.097 02:33:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:37.097 02:33:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.356 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:37.615 [2024-07-25 02:33:24.275052] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.615 [2024-07-25 02:33:24.275073] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279e2f434500 name Existed_Raid, state configuring 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:37.615 [2024-07-25 02:33:24.459101] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.615 [2024-07-25 02:33:24.459693] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.615 [2024-07-25 02:33:24.459726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.615 [2024-07-25 02:33:24.459729] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.615 [2024-07-25 02:33:24.459735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.615 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.873 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:37.873 "name": "Existed_Raid", 00:08:37.873 "uuid": "442dff86-4a2e-11ef-9c8e-7947904e2597", 00:08:37.873 "strip_size_kb": 64, 00:08:37.873 "state": "configuring", 00:08:37.873 "raid_level": "raid0", 00:08:37.873 "superblock": true, 00:08:37.873 "num_base_bdevs": 3, 00:08:37.873 "num_base_bdevs_discovered": 1, 00:08:37.873 "num_base_bdevs_operational": 3, 00:08:37.873 "base_bdevs_list": [ 00:08:37.873 { 00:08:37.873 "name": "BaseBdev1", 00:08:37.873 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:37.874 "is_configured": true, 00:08:37.874 "data_offset": 2048, 00:08:37.874 "data_size": 63488 00:08:37.874 }, 00:08:37.874 { 00:08:37.874 "name": "BaseBdev2", 00:08:37.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.874 "is_configured": false, 00:08:37.874 "data_offset": 0, 00:08:37.874 "data_size": 0 00:08:37.874 }, 00:08:37.874 { 00:08:37.874 "name": "BaseBdev3", 00:08:37.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.874 "is_configured": false, 00:08:37.874 "data_offset": 0, 00:08:37.874 "data_size": 0 00:08:37.874 } 00:08:37.874 ] 00:08:37.874 }' 00:08:37.874 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:37.874 02:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.132 02:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.391 [2024-07-25 02:33:25.107334] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.391 BaseBdev2 00:08:38.391 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:38.391 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:38.391 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:38.391 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:38.391 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:38.391 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:38.391 02:33:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.650 [ 00:08:38.650 { 00:08:38.650 "name": "BaseBdev2", 00:08:38.650 "aliases": [ 00:08:38.650 "4490e5a6-4a2e-11ef-9c8e-7947904e2597" 00:08:38.650 ], 00:08:38.650 "product_name": "Malloc disk", 00:08:38.650 "block_size": 512, 00:08:38.650 "num_blocks": 65536, 00:08:38.650 "uuid": "4490e5a6-4a2e-11ef-9c8e-7947904e2597", 00:08:38.650 "assigned_rate_limits": { 00:08:38.650 "rw_ios_per_sec": 0, 00:08:38.650 "rw_mbytes_per_sec": 0, 00:08:38.650 "r_mbytes_per_sec": 0, 00:08:38.650 "w_mbytes_per_sec": 0 00:08:38.650 }, 00:08:38.650 "claimed": true, 00:08:38.650 "claim_type": "exclusive_write", 00:08:38.650 "zoned": false, 00:08:38.650 "supported_io_types": { 00:08:38.650 "read": true, 00:08:38.650 "write": true, 00:08:38.650 "unmap": true, 00:08:38.650 "flush": true, 00:08:38.650 "reset": true, 00:08:38.650 "nvme_admin": false, 00:08:38.650 "nvme_io": false, 00:08:38.650 "nvme_io_md": false, 00:08:38.650 "write_zeroes": true, 00:08:38.650 "zcopy": true, 00:08:38.650 "get_zone_info": false, 00:08:38.650 "zone_management": false, 00:08:38.650 "zone_append": false, 00:08:38.650 "compare": false, 00:08:38.650 "compare_and_write": false, 00:08:38.650 "abort": true, 00:08:38.650 "seek_hole": false, 00:08:38.650 "seek_data": false, 00:08:38.650 "copy": true, 00:08:38.650 "nvme_iov_md": false 00:08:38.650 }, 00:08:38.650 "memory_domains": [ 00:08:38.650 { 00:08:38.650 "dma_device_id": "system", 00:08:38.650 "dma_device_type": 1 00:08:38.650 }, 00:08:38.650 { 00:08:38.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.650 "dma_device_type": 2 00:08:38.650 } 00:08:38.650 ], 00:08:38.650 "driver_specific": {} 00:08:38.650 } 00:08:38.650 ] 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
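Note: the waitforbdev steps traced above and the verify_raid_bdev_state query that follows come down to a short rpc.py sequence against the test's RPC socket. The lines below are an illustrative sketch only, not part of the captured run: they assume an SPDK target is already listening on /var/tmp/spdk-raid.sock, the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path is shortened to rpc.py, and the trailing "| .state" jq filter is an added convenience rather than the exact check the script performs.

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2    # 32 MiB malloc bdev, 512-byte blocks (65536 blocks, matching the dump above)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine                     # block until bdev examine callbacks have finished
    rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000       # confirm the new base bdev is registered (2 s timeout)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'                # stays "configuring" until all three base bdevs are present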
00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.650 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.909 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:38.910 "name": "Existed_Raid", 00:08:38.910 "uuid": "442dff86-4a2e-11ef-9c8e-7947904e2597", 00:08:38.910 "strip_size_kb": 64, 00:08:38.910 "state": "configuring", 00:08:38.910 "raid_level": "raid0", 00:08:38.910 "superblock": true, 00:08:38.910 "num_base_bdevs": 3, 00:08:38.910 "num_base_bdevs_discovered": 2, 00:08:38.910 "num_base_bdevs_operational": 3, 00:08:38.910 "base_bdevs_list": [ 00:08:38.910 { 00:08:38.910 "name": "BaseBdev1", 00:08:38.910 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:38.910 "is_configured": true, 00:08:38.910 "data_offset": 2048, 00:08:38.910 "data_size": 63488 00:08:38.910 }, 00:08:38.910 { 00:08:38.910 "name": "BaseBdev2", 00:08:38.910 "uuid": "4490e5a6-4a2e-11ef-9c8e-7947904e2597", 00:08:38.910 "is_configured": true, 00:08:38.910 "data_offset": 2048, 00:08:38.910 "data_size": 63488 00:08:38.910 }, 00:08:38.910 { 00:08:38.910 "name": "BaseBdev3", 00:08:38.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.910 "is_configured": false, 00:08:38.910 "data_offset": 0, 00:08:38.910 "data_size": 0 00:08:38.910 } 00:08:38.910 ] 00:08:38.910 }' 00:08:38.910 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:38.910 02:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.169 02:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.428 [2024-07-25 02:33:26.131535] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.428 [2024-07-25 02:33:26.131581] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x279e2f434a00 00:08:39.428 [2024-07-25 02:33:26.131586] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.428 [2024-07-25 02:33:26.131602] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x279e2f497e20 00:08:39.428 [2024-07-25 02:33:26.131634] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x279e2f434a00 00:08:39.428 [2024-07-25 02:33:26.131637] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x279e2f434a00 00:08:39.428 [2024-07-25 02:33:26.131652] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.428 BaseBdev3 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:08:39.428 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.688 [ 00:08:39.688 { 00:08:39.688 "name": "BaseBdev3", 00:08:39.688 "aliases": [ 00:08:39.688 "452d2e18-4a2e-11ef-9c8e-7947904e2597" 00:08:39.688 ], 00:08:39.688 "product_name": "Malloc disk", 00:08:39.688 "block_size": 512, 00:08:39.688 "num_blocks": 65536, 00:08:39.688 "uuid": "452d2e18-4a2e-11ef-9c8e-7947904e2597", 00:08:39.688 "assigned_rate_limits": { 00:08:39.688 "rw_ios_per_sec": 0, 00:08:39.688 "rw_mbytes_per_sec": 0, 00:08:39.688 "r_mbytes_per_sec": 0, 00:08:39.688 "w_mbytes_per_sec": 0 00:08:39.688 }, 00:08:39.688 "claimed": true, 00:08:39.688 "claim_type": "exclusive_write", 00:08:39.688 "zoned": false, 00:08:39.688 "supported_io_types": { 00:08:39.688 "read": true, 00:08:39.688 "write": true, 00:08:39.688 "unmap": true, 00:08:39.688 "flush": true, 00:08:39.688 "reset": true, 00:08:39.688 "nvme_admin": false, 00:08:39.688 "nvme_io": false, 00:08:39.688 "nvme_io_md": false, 00:08:39.688 "write_zeroes": true, 00:08:39.688 "zcopy": true, 00:08:39.688 "get_zone_info": false, 00:08:39.688 "zone_management": false, 00:08:39.688 "zone_append": false, 00:08:39.688 "compare": false, 00:08:39.688 "compare_and_write": false, 00:08:39.688 "abort": true, 00:08:39.688 "seek_hole": false, 00:08:39.688 "seek_data": false, 00:08:39.688 "copy": true, 00:08:39.688 "nvme_iov_md": false 00:08:39.688 }, 00:08:39.688 "memory_domains": [ 00:08:39.688 { 00:08:39.688 "dma_device_id": "system", 00:08:39.688 "dma_device_type": 1 00:08:39.688 }, 00:08:39.688 { 00:08:39.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.688 "dma_device_type": 2 00:08:39.688 } 00:08:39.688 ], 00:08:39.688 "driver_specific": {} 00:08:39.688 } 00:08:39.688 ] 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.688 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.948 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:39.948 "name": "Existed_Raid", 00:08:39.948 "uuid": "442dff86-4a2e-11ef-9c8e-7947904e2597", 00:08:39.948 "strip_size_kb": 64, 00:08:39.948 "state": "online", 00:08:39.948 "raid_level": "raid0", 00:08:39.948 "superblock": true, 00:08:39.948 "num_base_bdevs": 3, 00:08:39.948 "num_base_bdevs_discovered": 3, 00:08:39.948 "num_base_bdevs_operational": 3, 00:08:39.948 "base_bdevs_list": [ 00:08:39.948 { 00:08:39.948 "name": "BaseBdev1", 00:08:39.948 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:39.948 "is_configured": true, 00:08:39.948 "data_offset": 2048, 00:08:39.948 "data_size": 63488 00:08:39.948 }, 00:08:39.948 { 00:08:39.948 "name": "BaseBdev2", 00:08:39.948 "uuid": "4490e5a6-4a2e-11ef-9c8e-7947904e2597", 00:08:39.948 "is_configured": true, 00:08:39.948 "data_offset": 2048, 00:08:39.948 "data_size": 63488 00:08:39.948 }, 00:08:39.948 { 00:08:39.948 "name": "BaseBdev3", 00:08:39.948 "uuid": "452d2e18-4a2e-11ef-9c8e-7947904e2597", 00:08:39.948 "is_configured": true, 00:08:39.948 "data_offset": 2048, 00:08:39.948 "data_size": 63488 00:08:39.948 } 00:08:39.948 ] 00:08:39.948 }' 00:08:39.948 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:39.948 02:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:40.207 02:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:40.468 [2024-07-25 02:33:27.131685] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:40.468 "name": "Existed_Raid", 00:08:40.468 "aliases": [ 00:08:40.468 "442dff86-4a2e-11ef-9c8e-7947904e2597" 00:08:40.468 ], 00:08:40.468 "product_name": "Raid Volume", 00:08:40.468 "block_size": 512, 00:08:40.468 "num_blocks": 190464, 00:08:40.468 "uuid": "442dff86-4a2e-11ef-9c8e-7947904e2597", 00:08:40.468 "assigned_rate_limits": { 00:08:40.468 "rw_ios_per_sec": 0, 00:08:40.468 "rw_mbytes_per_sec": 0, 00:08:40.468 "r_mbytes_per_sec": 0, 00:08:40.468 "w_mbytes_per_sec": 0 00:08:40.468 }, 00:08:40.468 "claimed": false, 00:08:40.468 "zoned": false, 
00:08:40.468 "supported_io_types": { 00:08:40.468 "read": true, 00:08:40.468 "write": true, 00:08:40.468 "unmap": true, 00:08:40.468 "flush": true, 00:08:40.468 "reset": true, 00:08:40.468 "nvme_admin": false, 00:08:40.468 "nvme_io": false, 00:08:40.468 "nvme_io_md": false, 00:08:40.468 "write_zeroes": true, 00:08:40.468 "zcopy": false, 00:08:40.468 "get_zone_info": false, 00:08:40.468 "zone_management": false, 00:08:40.468 "zone_append": false, 00:08:40.468 "compare": false, 00:08:40.468 "compare_and_write": false, 00:08:40.468 "abort": false, 00:08:40.468 "seek_hole": false, 00:08:40.468 "seek_data": false, 00:08:40.468 "copy": false, 00:08:40.468 "nvme_iov_md": false 00:08:40.468 }, 00:08:40.468 "memory_domains": [ 00:08:40.468 { 00:08:40.468 "dma_device_id": "system", 00:08:40.468 "dma_device_type": 1 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.468 "dma_device_type": 2 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "dma_device_id": "system", 00:08:40.468 "dma_device_type": 1 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.468 "dma_device_type": 2 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "dma_device_id": "system", 00:08:40.468 "dma_device_type": 1 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.468 "dma_device_type": 2 00:08:40.468 } 00:08:40.468 ], 00:08:40.468 "driver_specific": { 00:08:40.468 "raid": { 00:08:40.468 "uuid": "442dff86-4a2e-11ef-9c8e-7947904e2597", 00:08:40.468 "strip_size_kb": 64, 00:08:40.468 "state": "online", 00:08:40.468 "raid_level": "raid0", 00:08:40.468 "superblock": true, 00:08:40.468 "num_base_bdevs": 3, 00:08:40.468 "num_base_bdevs_discovered": 3, 00:08:40.468 "num_base_bdevs_operational": 3, 00:08:40.468 "base_bdevs_list": [ 00:08:40.468 { 00:08:40.468 "name": "BaseBdev1", 00:08:40.468 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:40.468 "is_configured": true, 00:08:40.468 "data_offset": 2048, 00:08:40.468 "data_size": 63488 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "name": "BaseBdev2", 00:08:40.468 "uuid": "4490e5a6-4a2e-11ef-9c8e-7947904e2597", 00:08:40.468 "is_configured": true, 00:08:40.468 "data_offset": 2048, 00:08:40.468 "data_size": 63488 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "name": "BaseBdev3", 00:08:40.468 "uuid": "452d2e18-4a2e-11ef-9c8e-7947904e2597", 00:08:40.468 "is_configured": true, 00:08:40.468 "data_offset": 2048, 00:08:40.468 "data_size": 63488 00:08:40.468 } 00:08:40.468 ] 00:08:40.468 } 00:08:40.468 } 00:08:40.468 }' 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:40.468 BaseBdev2 00:08:40.468 BaseBdev3' 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:40.468 "name": "BaseBdev1", 00:08:40.468 "aliases": [ 00:08:40.468 "43763e71-4a2e-11ef-9c8e-7947904e2597" 00:08:40.468 
], 00:08:40.468 "product_name": "Malloc disk", 00:08:40.468 "block_size": 512, 00:08:40.468 "num_blocks": 65536, 00:08:40.468 "uuid": "43763e71-4a2e-11ef-9c8e-7947904e2597", 00:08:40.468 "assigned_rate_limits": { 00:08:40.468 "rw_ios_per_sec": 0, 00:08:40.468 "rw_mbytes_per_sec": 0, 00:08:40.468 "r_mbytes_per_sec": 0, 00:08:40.468 "w_mbytes_per_sec": 0 00:08:40.468 }, 00:08:40.468 "claimed": true, 00:08:40.468 "claim_type": "exclusive_write", 00:08:40.468 "zoned": false, 00:08:40.468 "supported_io_types": { 00:08:40.468 "read": true, 00:08:40.468 "write": true, 00:08:40.468 "unmap": true, 00:08:40.468 "flush": true, 00:08:40.468 "reset": true, 00:08:40.468 "nvme_admin": false, 00:08:40.468 "nvme_io": false, 00:08:40.468 "nvme_io_md": false, 00:08:40.468 "write_zeroes": true, 00:08:40.468 "zcopy": true, 00:08:40.468 "get_zone_info": false, 00:08:40.468 "zone_management": false, 00:08:40.468 "zone_append": false, 00:08:40.468 "compare": false, 00:08:40.468 "compare_and_write": false, 00:08:40.468 "abort": true, 00:08:40.468 "seek_hole": false, 00:08:40.468 "seek_data": false, 00:08:40.468 "copy": true, 00:08:40.468 "nvme_iov_md": false 00:08:40.468 }, 00:08:40.468 "memory_domains": [ 00:08:40.468 { 00:08:40.468 "dma_device_id": "system", 00:08:40.468 "dma_device_type": 1 00:08:40.468 }, 00:08:40.468 { 00:08:40.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.468 "dma_device_type": 2 00:08:40.468 } 00:08:40.468 ], 00:08:40.468 "driver_specific": {} 00:08:40.468 }' 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.468 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:40.728 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:40.988 "name": "BaseBdev2", 00:08:40.988 "aliases": [ 00:08:40.988 "4490e5a6-4a2e-11ef-9c8e-7947904e2597" 00:08:40.988 ], 00:08:40.988 "product_name": "Malloc disk", 00:08:40.988 "block_size": 512, 00:08:40.988 "num_blocks": 65536, 00:08:40.988 "uuid": 
"4490e5a6-4a2e-11ef-9c8e-7947904e2597", 00:08:40.988 "assigned_rate_limits": { 00:08:40.988 "rw_ios_per_sec": 0, 00:08:40.988 "rw_mbytes_per_sec": 0, 00:08:40.988 "r_mbytes_per_sec": 0, 00:08:40.988 "w_mbytes_per_sec": 0 00:08:40.988 }, 00:08:40.988 "claimed": true, 00:08:40.988 "claim_type": "exclusive_write", 00:08:40.988 "zoned": false, 00:08:40.988 "supported_io_types": { 00:08:40.988 "read": true, 00:08:40.988 "write": true, 00:08:40.988 "unmap": true, 00:08:40.988 "flush": true, 00:08:40.988 "reset": true, 00:08:40.988 "nvme_admin": false, 00:08:40.988 "nvme_io": false, 00:08:40.988 "nvme_io_md": false, 00:08:40.988 "write_zeroes": true, 00:08:40.988 "zcopy": true, 00:08:40.988 "get_zone_info": false, 00:08:40.988 "zone_management": false, 00:08:40.988 "zone_append": false, 00:08:40.988 "compare": false, 00:08:40.988 "compare_and_write": false, 00:08:40.988 "abort": true, 00:08:40.988 "seek_hole": false, 00:08:40.988 "seek_data": false, 00:08:40.988 "copy": true, 00:08:40.988 "nvme_iov_md": false 00:08:40.988 }, 00:08:40.988 "memory_domains": [ 00:08:40.988 { 00:08:40.988 "dma_device_id": "system", 00:08:40.988 "dma_device_type": 1 00:08:40.988 }, 00:08:40.988 { 00:08:40.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.988 "dma_device_type": 2 00:08:40.988 } 00:08:40.988 ], 00:08:40.988 "driver_specific": {} 00:08:40.988 }' 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:08:40.988 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:41.248 "name": "BaseBdev3", 00:08:41.248 "aliases": [ 00:08:41.248 "452d2e18-4a2e-11ef-9c8e-7947904e2597" 00:08:41.248 ], 00:08:41.248 "product_name": "Malloc disk", 00:08:41.248 "block_size": 512, 00:08:41.248 "num_blocks": 65536, 00:08:41.248 "uuid": "452d2e18-4a2e-11ef-9c8e-7947904e2597", 00:08:41.248 "assigned_rate_limits": { 00:08:41.248 "rw_ios_per_sec": 0, 00:08:41.248 "rw_mbytes_per_sec": 0, 
00:08:41.248 "r_mbytes_per_sec": 0, 00:08:41.248 "w_mbytes_per_sec": 0 00:08:41.248 }, 00:08:41.248 "claimed": true, 00:08:41.248 "claim_type": "exclusive_write", 00:08:41.248 "zoned": false, 00:08:41.248 "supported_io_types": { 00:08:41.248 "read": true, 00:08:41.248 "write": true, 00:08:41.248 "unmap": true, 00:08:41.248 "flush": true, 00:08:41.248 "reset": true, 00:08:41.248 "nvme_admin": false, 00:08:41.248 "nvme_io": false, 00:08:41.248 "nvme_io_md": false, 00:08:41.248 "write_zeroes": true, 00:08:41.248 "zcopy": true, 00:08:41.248 "get_zone_info": false, 00:08:41.248 "zone_management": false, 00:08:41.248 "zone_append": false, 00:08:41.248 "compare": false, 00:08:41.248 "compare_and_write": false, 00:08:41.248 "abort": true, 00:08:41.248 "seek_hole": false, 00:08:41.248 "seek_data": false, 00:08:41.248 "copy": true, 00:08:41.248 "nvme_iov_md": false 00:08:41.248 }, 00:08:41.248 "memory_domains": [ 00:08:41.248 { 00:08:41.248 "dma_device_id": "system", 00:08:41.248 "dma_device_type": 1 00:08:41.248 }, 00:08:41.248 { 00:08:41.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.248 "dma_device_type": 2 00:08:41.248 } 00:08:41.248 ], 00:08:41.248 "driver_specific": {} 00:08:41.248 }' 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:41.248 02:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:41.248 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:41.248 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:41.508 [2024-07-25 02:33:28.175886] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.508 [2024-07-25 02:33:28.175904] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.508 [2024-07-25 02:33:28.175914] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:41.508 "name": "Existed_Raid", 00:08:41.508 "uuid": "442dff86-4a2e-11ef-9c8e-7947904e2597", 00:08:41.508 "strip_size_kb": 64, 00:08:41.508 "state": "offline", 00:08:41.508 "raid_level": "raid0", 00:08:41.508 "superblock": true, 00:08:41.508 "num_base_bdevs": 3, 00:08:41.508 "num_base_bdevs_discovered": 2, 00:08:41.508 "num_base_bdevs_operational": 2, 00:08:41.508 "base_bdevs_list": [ 00:08:41.508 { 00:08:41.508 "name": null, 00:08:41.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.508 "is_configured": false, 00:08:41.508 "data_offset": 2048, 00:08:41.508 "data_size": 63488 00:08:41.508 }, 00:08:41.508 { 00:08:41.508 "name": "BaseBdev2", 00:08:41.508 "uuid": "4490e5a6-4a2e-11ef-9c8e-7947904e2597", 00:08:41.508 "is_configured": true, 00:08:41.508 "data_offset": 2048, 00:08:41.508 "data_size": 63488 00:08:41.508 }, 00:08:41.508 { 00:08:41.508 "name": "BaseBdev3", 00:08:41.508 "uuid": "452d2e18-4a2e-11ef-9c8e-7947904e2597", 00:08:41.508 "is_configured": true, 00:08:41.508 "data_offset": 2048, 00:08:41.508 "data_size": 63488 00:08:41.508 } 00:08:41.508 ] 00:08:41.508 }' 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:41.508 02:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.767 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:41.767 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:41.767 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.767 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:42.026 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:08:42.026 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.026 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:42.285 [2024-07-25 02:33:28.972688] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:42.285 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:42.285 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:42.285 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.285 02:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:42.285 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:42.285 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:42.285 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:42.545 [2024-07-25 02:33:29.325572] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.545 [2024-07-25 02:33:29.325592] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279e2f434a00 name Existed_Raid, state offline 00:08:42.545 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:42.545 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:42.545 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.545 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:42.804 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:42.804 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:42.804 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:08:42.804 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:08:42.804 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:42.804 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.064 BaseBdev2 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:43.064 02:33:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:43.064 02:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.323 [ 00:08:43.323 { 00:08:43.323 "name": "BaseBdev2", 00:08:43.323 "aliases": [ 00:08:43.323 "474eadcb-4a2e-11ef-9c8e-7947904e2597" 00:08:43.323 ], 00:08:43.323 "product_name": "Malloc disk", 00:08:43.323 "block_size": 512, 00:08:43.323 "num_blocks": 65536, 00:08:43.323 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:43.323 "assigned_rate_limits": { 00:08:43.323 "rw_ios_per_sec": 0, 00:08:43.323 "rw_mbytes_per_sec": 0, 00:08:43.323 "r_mbytes_per_sec": 0, 00:08:43.323 "w_mbytes_per_sec": 0 00:08:43.323 }, 00:08:43.323 "claimed": false, 00:08:43.323 "zoned": false, 00:08:43.323 "supported_io_types": { 00:08:43.323 "read": true, 00:08:43.323 "write": true, 00:08:43.323 "unmap": true, 00:08:43.323 "flush": true, 00:08:43.323 "reset": true, 00:08:43.323 "nvme_admin": false, 00:08:43.323 "nvme_io": false, 00:08:43.323 "nvme_io_md": false, 00:08:43.323 "write_zeroes": true, 00:08:43.323 "zcopy": true, 00:08:43.323 "get_zone_info": false, 00:08:43.323 "zone_management": false, 00:08:43.323 "zone_append": false, 00:08:43.323 "compare": false, 00:08:43.323 "compare_and_write": false, 00:08:43.323 "abort": true, 00:08:43.323 "seek_hole": false, 00:08:43.323 "seek_data": false, 00:08:43.323 "copy": true, 00:08:43.323 "nvme_iov_md": false 00:08:43.323 }, 00:08:43.323 "memory_domains": [ 00:08:43.323 { 00:08:43.323 "dma_device_id": "system", 00:08:43.323 "dma_device_type": 1 00:08:43.323 }, 00:08:43.323 { 00:08:43.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.323 "dma_device_type": 2 00:08:43.323 } 00:08:43.323 ], 00:08:43.323 "driver_specific": {} 00:08:43.323 } 00:08:43.323 ] 00:08:43.323 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:43.323 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:43.323 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:43.323 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.583 BaseBdev3 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:43.583 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:43.583 02:33:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.843 [ 00:08:43.843 { 00:08:43.843 "name": "BaseBdev3", 00:08:43.843 "aliases": [ 00:08:43.843 "47a4c9c2-4a2e-11ef-9c8e-7947904e2597" 00:08:43.843 ], 00:08:43.843 "product_name": "Malloc disk", 00:08:43.843 "block_size": 512, 00:08:43.843 "num_blocks": 65536, 00:08:43.843 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:43.843 "assigned_rate_limits": { 00:08:43.843 "rw_ios_per_sec": 0, 00:08:43.843 "rw_mbytes_per_sec": 0, 00:08:43.843 "r_mbytes_per_sec": 0, 00:08:43.843 "w_mbytes_per_sec": 0 00:08:43.843 }, 00:08:43.843 "claimed": false, 00:08:43.843 "zoned": false, 00:08:43.843 "supported_io_types": { 00:08:43.843 "read": true, 00:08:43.843 "write": true, 00:08:43.843 "unmap": true, 00:08:43.843 "flush": true, 00:08:43.843 "reset": true, 00:08:43.843 "nvme_admin": false, 00:08:43.843 "nvme_io": false, 00:08:43.843 "nvme_io_md": false, 00:08:43.843 "write_zeroes": true, 00:08:43.843 "zcopy": true, 00:08:43.843 "get_zone_info": false, 00:08:43.843 "zone_management": false, 00:08:43.843 "zone_append": false, 00:08:43.843 "compare": false, 00:08:43.843 "compare_and_write": false, 00:08:43.843 "abort": true, 00:08:43.843 "seek_hole": false, 00:08:43.843 "seek_data": false, 00:08:43.843 "copy": true, 00:08:43.843 "nvme_iov_md": false 00:08:43.843 }, 00:08:43.843 "memory_domains": [ 00:08:43.843 { 00:08:43.843 "dma_device_id": "system", 00:08:43.843 "dma_device_type": 1 00:08:43.843 }, 00:08:43.843 { 00:08:43.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.843 "dma_device_type": 2 00:08:43.843 } 00:08:43.843 ], 00:08:43.843 "driver_specific": {} 00:08:43.843 } 00:08:43.843 ] 00:08:43.843 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:43.843 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:43.843 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:43.843 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:44.110 [2024-07-25 02:33:30.795060] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.110 [2024-07-25 02:33:30.795098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.110 [2024-07-25 02:33:30.795119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.110 [2024-07-25 02:33:30.795498] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:44.110 02:33:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.110 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:44.110 "name": "Existed_Raid", 00:08:44.110 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:44.110 "strip_size_kb": 64, 00:08:44.110 "state": "configuring", 00:08:44.110 "raid_level": "raid0", 00:08:44.110 "superblock": true, 00:08:44.110 "num_base_bdevs": 3, 00:08:44.110 "num_base_bdevs_discovered": 2, 00:08:44.110 "num_base_bdevs_operational": 3, 00:08:44.110 "base_bdevs_list": [ 00:08:44.110 { 00:08:44.110 "name": "BaseBdev1", 00:08:44.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.110 "is_configured": false, 00:08:44.110 "data_offset": 0, 00:08:44.110 "data_size": 0 00:08:44.110 }, 00:08:44.110 { 00:08:44.110 "name": "BaseBdev2", 00:08:44.110 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:44.110 "is_configured": true, 00:08:44.110 "data_offset": 2048, 00:08:44.110 "data_size": 63488 00:08:44.110 }, 00:08:44.110 { 00:08:44.110 "name": "BaseBdev3", 00:08:44.110 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:44.110 "is_configured": true, 00:08:44.111 "data_offset": 2048, 00:08:44.111 "data_size": 63488 00:08:44.111 } 00:08:44.111 ] 00:08:44.111 }' 00:08:44.111 02:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:44.111 02:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.387 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:08:44.646 [2024-07-25 02:33:31.443416] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:44.646 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.647 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.907 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:44.907 "name": "Existed_Raid", 00:08:44.907 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:44.907 "strip_size_kb": 64, 00:08:44.907 "state": "configuring", 00:08:44.907 "raid_level": "raid0", 00:08:44.907 "superblock": true, 00:08:44.907 "num_base_bdevs": 3, 00:08:44.907 "num_base_bdevs_discovered": 1, 00:08:44.907 "num_base_bdevs_operational": 3, 00:08:44.907 "base_bdevs_list": [ 00:08:44.907 { 00:08:44.907 "name": "BaseBdev1", 00:08:44.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.907 "is_configured": false, 00:08:44.907 "data_offset": 0, 00:08:44.907 "data_size": 0 00:08:44.907 }, 00:08:44.907 { 00:08:44.907 "name": null, 00:08:44.907 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:44.907 "is_configured": false, 00:08:44.907 "data_offset": 2048, 00:08:44.907 "data_size": 63488 00:08:44.907 }, 00:08:44.907 { 00:08:44.907 "name": "BaseBdev3", 00:08:44.907 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:44.907 "is_configured": true, 00:08:44.907 "data_offset": 2048, 00:08:44.907 "data_size": 63488 00:08:44.907 } 00:08:44.907 ] 00:08:44.907 }' 00:08:44.907 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:44.907 02:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.169 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.169 02:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:45.429 [2024-07-25 02:33:32.271988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.429 BaseBdev1 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:45.429 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:45.688 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:45.949 [ 00:08:45.949 { 00:08:45.949 "name": "BaseBdev1", 00:08:45.949 "aliases": [ 00:08:45.949 "48d622e4-4a2e-11ef-9c8e-7947904e2597" 00:08:45.949 ], 00:08:45.949 "product_name": "Malloc disk", 00:08:45.949 "block_size": 512, 00:08:45.949 "num_blocks": 65536, 00:08:45.949 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:45.949 "assigned_rate_limits": { 00:08:45.949 "rw_ios_per_sec": 0, 00:08:45.949 "rw_mbytes_per_sec": 0, 00:08:45.949 "r_mbytes_per_sec": 0, 00:08:45.949 "w_mbytes_per_sec": 0 00:08:45.949 }, 00:08:45.949 "claimed": true, 00:08:45.949 "claim_type": "exclusive_write", 00:08:45.949 "zoned": false, 00:08:45.949 "supported_io_types": { 00:08:45.949 "read": true, 00:08:45.949 "write": true, 00:08:45.949 "unmap": true, 00:08:45.949 "flush": true, 00:08:45.949 "reset": true, 00:08:45.949 "nvme_admin": false, 00:08:45.949 "nvme_io": false, 00:08:45.949 "nvme_io_md": false, 00:08:45.949 "write_zeroes": true, 00:08:45.949 "zcopy": true, 00:08:45.949 "get_zone_info": false, 00:08:45.949 "zone_management": false, 00:08:45.949 "zone_append": false, 00:08:45.949 "compare": false, 00:08:45.949 "compare_and_write": false, 00:08:45.949 "abort": true, 00:08:45.949 "seek_hole": false, 00:08:45.949 "seek_data": false, 00:08:45.949 "copy": true, 00:08:45.949 "nvme_iov_md": false 00:08:45.949 }, 00:08:45.949 "memory_domains": [ 00:08:45.949 { 00:08:45.949 "dma_device_id": "system", 00:08:45.949 "dma_device_type": 1 00:08:45.949 }, 00:08:45.949 { 00:08:45.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.949 "dma_device_type": 2 00:08:45.949 } 00:08:45.949 ], 00:08:45.949 "driver_specific": {} 00:08:45.949 } 00:08:45.949 ] 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.949 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:08:46.210 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.210 "name": "Existed_Raid", 00:08:46.210 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:46.210 "strip_size_kb": 64, 00:08:46.210 "state": "configuring", 00:08:46.210 "raid_level": "raid0", 00:08:46.210 "superblock": true, 00:08:46.210 "num_base_bdevs": 3, 00:08:46.210 "num_base_bdevs_discovered": 2, 00:08:46.210 "num_base_bdevs_operational": 3, 00:08:46.210 "base_bdevs_list": [ 00:08:46.210 { 00:08:46.210 "name": "BaseBdev1", 00:08:46.210 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:46.210 "is_configured": true, 00:08:46.210 "data_offset": 2048, 00:08:46.210 "data_size": 63488 00:08:46.210 }, 00:08:46.210 { 00:08:46.210 "name": null, 00:08:46.210 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:46.210 "is_configured": false, 00:08:46.210 "data_offset": 2048, 00:08:46.210 "data_size": 63488 00:08:46.210 }, 00:08:46.210 { 00:08:46.210 "name": "BaseBdev3", 00:08:46.210 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:46.210 "is_configured": true, 00:08:46.210 "data_offset": 2048, 00:08:46.210 "data_size": 63488 00:08:46.210 } 00:08:46.210 ] 00:08:46.210 }' 00:08:46.210 02:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.210 02:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.471 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.471 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.471 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:08:46.471 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:08:46.729 [2024-07-25 02:33:33.504576] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:08:46.729 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.988 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:46.988 "name": "Existed_Raid", 00:08:46.988 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:46.988 "strip_size_kb": 64, 00:08:46.988 "state": "configuring", 00:08:46.988 "raid_level": "raid0", 00:08:46.988 "superblock": true, 00:08:46.988 "num_base_bdevs": 3, 00:08:46.988 "num_base_bdevs_discovered": 1, 00:08:46.988 "num_base_bdevs_operational": 3, 00:08:46.988 "base_bdevs_list": [ 00:08:46.988 { 00:08:46.988 "name": "BaseBdev1", 00:08:46.988 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:46.988 "is_configured": true, 00:08:46.988 "data_offset": 2048, 00:08:46.988 "data_size": 63488 00:08:46.988 }, 00:08:46.988 { 00:08:46.988 "name": null, 00:08:46.988 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:46.988 "is_configured": false, 00:08:46.988 "data_offset": 2048, 00:08:46.988 "data_size": 63488 00:08:46.988 }, 00:08:46.988 { 00:08:46.988 "name": null, 00:08:46.988 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:46.988 "is_configured": false, 00:08:46.988 "data_offset": 2048, 00:08:46.988 "data_size": 63488 00:08:46.989 } 00:08:46.989 ] 00:08:46.989 }' 00:08:46.989 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:46.989 02:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.248 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.248 02:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:47.507 [2024-07-25 02:33:34.333003] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:47.507 02:33:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.507 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.767 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:47.767 "name": "Existed_Raid", 00:08:47.767 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:47.767 "strip_size_kb": 64, 00:08:47.767 "state": "configuring", 00:08:47.767 "raid_level": "raid0", 00:08:47.767 "superblock": true, 00:08:47.767 "num_base_bdevs": 3, 00:08:47.767 "num_base_bdevs_discovered": 2, 00:08:47.767 "num_base_bdevs_operational": 3, 00:08:47.767 "base_bdevs_list": [ 00:08:47.767 { 00:08:47.767 "name": "BaseBdev1", 00:08:47.767 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:47.767 "is_configured": true, 00:08:47.767 "data_offset": 2048, 00:08:47.767 "data_size": 63488 00:08:47.767 }, 00:08:47.767 { 00:08:47.767 "name": null, 00:08:47.767 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:47.767 "is_configured": false, 00:08:47.767 "data_offset": 2048, 00:08:47.767 "data_size": 63488 00:08:47.767 }, 00:08:47.767 { 00:08:47.767 "name": "BaseBdev3", 00:08:47.767 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:47.767 "is_configured": true, 00:08:47.767 "data_offset": 2048, 00:08:47.767 "data_size": 63488 00:08:47.767 } 00:08:47.767 ] 00:08:47.767 }' 00:08:47.767 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:47.767 02:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.026 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.026 02:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:48.285 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:08:48.285 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:48.285 [2024-07-25 02:33:35.169439] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.544 
02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.544 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:48.544 "name": "Existed_Raid", 00:08:48.544 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:48.544 "strip_size_kb": 64, 00:08:48.544 "state": "configuring", 00:08:48.544 "raid_level": "raid0", 00:08:48.544 "superblock": true, 00:08:48.544 "num_base_bdevs": 3, 00:08:48.544 "num_base_bdevs_discovered": 1, 00:08:48.544 "num_base_bdevs_operational": 3, 00:08:48.544 "base_bdevs_list": [ 00:08:48.544 { 00:08:48.544 "name": null, 00:08:48.544 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:48.544 "is_configured": false, 00:08:48.544 "data_offset": 2048, 00:08:48.544 "data_size": 63488 00:08:48.544 }, 00:08:48.544 { 00:08:48.544 "name": null, 00:08:48.544 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:48.544 "is_configured": false, 00:08:48.544 "data_offset": 2048, 00:08:48.544 "data_size": 63488 00:08:48.544 }, 00:08:48.544 { 00:08:48.544 "name": "BaseBdev3", 00:08:48.544 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:48.545 "is_configured": true, 00:08:48.545 "data_offset": 2048, 00:08:48.545 "data_size": 63488 00:08:48.545 } 00:08:48.545 ] 00:08:48.545 }' 00:08:48.545 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:48.545 02:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.804 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.804 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:49.063 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:08:49.063 02:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:49.323 [2024-07-25 02:33:36.010990] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.323 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.584 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:49.584 "name": "Existed_Raid", 00:08:49.584 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:49.584 "strip_size_kb": 64, 00:08:49.584 "state": "configuring", 00:08:49.584 "raid_level": "raid0", 00:08:49.584 "superblock": true, 00:08:49.584 "num_base_bdevs": 3, 00:08:49.584 "num_base_bdevs_discovered": 2, 00:08:49.584 "num_base_bdevs_operational": 3, 00:08:49.584 "base_bdevs_list": [ 00:08:49.584 { 00:08:49.584 "name": null, 00:08:49.584 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:49.584 "is_configured": false, 00:08:49.584 "data_offset": 2048, 00:08:49.584 "data_size": 63488 00:08:49.584 }, 00:08:49.584 { 00:08:49.584 "name": "BaseBdev2", 00:08:49.584 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:49.584 "is_configured": true, 00:08:49.584 "data_offset": 2048, 00:08:49.584 "data_size": 63488 00:08:49.584 }, 00:08:49.584 { 00:08:49.584 "name": "BaseBdev3", 00:08:49.584 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:49.584 "is_configured": true, 00:08:49.584 "data_offset": 2048, 00:08:49.584 "data_size": 63488 00:08:49.584 } 00:08:49.584 ] 00:08:49.584 }' 00:08:49.584 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:49.584 02:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.844 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.844 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.844 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:08:49.844 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.844 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:50.103 02:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 48d622e4-4a2e-11ef-9c8e-7947904e2597 00:08:50.363 [2024-07-25 02:33:37.071648] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:50.363 [2024-07-25 02:33:37.071689] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x279e2f434a00 00:08:50.363 [2024-07-25 02:33:37.071693] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.363 [2024-07-25 02:33:37.071710] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x279e2f497e20 00:08:50.363 [2024-07-25 02:33:37.071746] bdev_raid.c:1750:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x279e2f434a00 00:08:50.363 [2024-07-25 02:33:37.071749] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x279e2f434a00 00:08:50.363 [2024-07-25 02:33:37.071765] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.363 NewBaseBdev 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:50.363 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:50.624 [ 00:08:50.624 { 00:08:50.624 "name": "NewBaseBdev", 00:08:50.624 "aliases": [ 00:08:50.624 "48d622e4-4a2e-11ef-9c8e-7947904e2597" 00:08:50.624 ], 00:08:50.624 "product_name": "Malloc disk", 00:08:50.624 "block_size": 512, 00:08:50.624 "num_blocks": 65536, 00:08:50.624 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:50.624 "assigned_rate_limits": { 00:08:50.624 "rw_ios_per_sec": 0, 00:08:50.624 "rw_mbytes_per_sec": 0, 00:08:50.624 "r_mbytes_per_sec": 0, 00:08:50.624 "w_mbytes_per_sec": 0 00:08:50.624 }, 00:08:50.624 "claimed": true, 00:08:50.624 "claim_type": "exclusive_write", 00:08:50.624 "zoned": false, 00:08:50.624 "supported_io_types": { 00:08:50.624 "read": true, 00:08:50.624 "write": true, 00:08:50.624 "unmap": true, 00:08:50.624 "flush": true, 00:08:50.624 "reset": true, 00:08:50.624 "nvme_admin": false, 00:08:50.624 "nvme_io": false, 00:08:50.624 "nvme_io_md": false, 00:08:50.624 "write_zeroes": true, 00:08:50.624 "zcopy": true, 00:08:50.624 "get_zone_info": false, 00:08:50.624 "zone_management": false, 00:08:50.624 "zone_append": false, 00:08:50.624 "compare": false, 00:08:50.624 "compare_and_write": false, 00:08:50.624 "abort": true, 00:08:50.624 "seek_hole": false, 00:08:50.624 "seek_data": false, 00:08:50.624 "copy": true, 00:08:50.624 "nvme_iov_md": false 00:08:50.624 }, 00:08:50.624 "memory_domains": [ 00:08:50.624 { 00:08:50.624 "dma_device_id": "system", 00:08:50.624 "dma_device_type": 1 00:08:50.624 }, 00:08:50.624 { 00:08:50.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.624 "dma_device_type": 2 00:08:50.624 } 00:08:50.624 ], 00:08:50.624 "driver_specific": {} 00:08:50.624 } 00:08:50.624 ] 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.624 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.884 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:50.884 "name": "Existed_Raid", 00:08:50.884 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:50.884 "strip_size_kb": 64, 00:08:50.884 "state": "online", 00:08:50.884 "raid_level": "raid0", 00:08:50.884 "superblock": true, 00:08:50.884 "num_base_bdevs": 3, 00:08:50.884 "num_base_bdevs_discovered": 3, 00:08:50.884 "num_base_bdevs_operational": 3, 00:08:50.884 "base_bdevs_list": [ 00:08:50.884 { 00:08:50.884 "name": "NewBaseBdev", 00:08:50.884 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:50.884 "is_configured": true, 00:08:50.884 "data_offset": 2048, 00:08:50.884 "data_size": 63488 00:08:50.884 }, 00:08:50.884 { 00:08:50.884 "name": "BaseBdev2", 00:08:50.884 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:50.884 "is_configured": true, 00:08:50.884 "data_offset": 2048, 00:08:50.884 "data_size": 63488 00:08:50.884 }, 00:08:50.884 { 00:08:50.884 "name": "BaseBdev3", 00:08:50.884 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:50.884 "is_configured": true, 00:08:50.884 "data_offset": 2048, 00:08:50.884 "data_size": 63488 00:08:50.884 } 00:08:50.884 ] 00:08:50.884 }' 00:08:50.884 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:50.884 02:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:51.145 02:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:08:51.405 [2024-07-25 02:33:38.104022] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.405 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:51.405 "name": "Existed_Raid", 00:08:51.405 "aliases": [ 00:08:51.405 "47f4c9f0-4a2e-11ef-9c8e-7947904e2597" 00:08:51.405 ], 00:08:51.405 "product_name": "Raid Volume", 00:08:51.405 "block_size": 512, 00:08:51.405 "num_blocks": 190464, 00:08:51.405 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:51.405 "assigned_rate_limits": { 00:08:51.405 "rw_ios_per_sec": 0, 00:08:51.405 "rw_mbytes_per_sec": 0, 00:08:51.405 "r_mbytes_per_sec": 0, 00:08:51.405 "w_mbytes_per_sec": 0 00:08:51.405 }, 00:08:51.405 "claimed": false, 00:08:51.405 "zoned": false, 00:08:51.405 "supported_io_types": { 00:08:51.405 "read": true, 00:08:51.405 "write": true, 00:08:51.405 "unmap": true, 00:08:51.405 "flush": true, 00:08:51.405 "reset": true, 00:08:51.405 "nvme_admin": false, 00:08:51.405 "nvme_io": false, 00:08:51.405 "nvme_io_md": false, 00:08:51.405 "write_zeroes": true, 00:08:51.405 "zcopy": false, 00:08:51.405 "get_zone_info": false, 00:08:51.405 "zone_management": false, 00:08:51.405 "zone_append": false, 00:08:51.405 "compare": false, 00:08:51.405 "compare_and_write": false, 00:08:51.405 "abort": false, 00:08:51.405 "seek_hole": false, 00:08:51.405 "seek_data": false, 00:08:51.405 "copy": false, 00:08:51.405 "nvme_iov_md": false 00:08:51.405 }, 00:08:51.405 "memory_domains": [ 00:08:51.405 { 00:08:51.405 "dma_device_id": "system", 00:08:51.405 "dma_device_type": 1 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.405 "dma_device_type": 2 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "dma_device_id": "system", 00:08:51.405 "dma_device_type": 1 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.405 "dma_device_type": 2 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "dma_device_id": "system", 00:08:51.405 "dma_device_type": 1 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.405 "dma_device_type": 2 00:08:51.405 } 00:08:51.405 ], 00:08:51.405 "driver_specific": { 00:08:51.405 "raid": { 00:08:51.405 "uuid": "47f4c9f0-4a2e-11ef-9c8e-7947904e2597", 00:08:51.405 "strip_size_kb": 64, 00:08:51.405 "state": "online", 00:08:51.405 "raid_level": "raid0", 00:08:51.405 "superblock": true, 00:08:51.405 "num_base_bdevs": 3, 00:08:51.405 "num_base_bdevs_discovered": 3, 00:08:51.405 "num_base_bdevs_operational": 3, 00:08:51.405 "base_bdevs_list": [ 00:08:51.405 { 00:08:51.405 "name": "NewBaseBdev", 00:08:51.405 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:51.405 "is_configured": true, 00:08:51.405 "data_offset": 2048, 00:08:51.405 "data_size": 63488 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "name": "BaseBdev2", 00:08:51.405 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:51.405 "is_configured": true, 00:08:51.405 "data_offset": 2048, 00:08:51.405 "data_size": 63488 00:08:51.405 }, 00:08:51.405 { 00:08:51.405 "name": "BaseBdev3", 00:08:51.405 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:51.405 "is_configured": true, 00:08:51.405 "data_offset": 2048, 00:08:51.405 "data_size": 63488 00:08:51.405 } 00:08:51.405 ] 00:08:51.405 } 00:08:51.405 } 00:08:51.405 }' 00:08:51.405 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.405 
02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:08:51.405 BaseBdev2 00:08:51.405 BaseBdev3' 00:08:51.405 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:51.405 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:08:51.405 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:51.666 "name": "NewBaseBdev", 00:08:51.666 "aliases": [ 00:08:51.666 "48d622e4-4a2e-11ef-9c8e-7947904e2597" 00:08:51.666 ], 00:08:51.666 "product_name": "Malloc disk", 00:08:51.666 "block_size": 512, 00:08:51.666 "num_blocks": 65536, 00:08:51.666 "uuid": "48d622e4-4a2e-11ef-9c8e-7947904e2597", 00:08:51.666 "assigned_rate_limits": { 00:08:51.666 "rw_ios_per_sec": 0, 00:08:51.666 "rw_mbytes_per_sec": 0, 00:08:51.666 "r_mbytes_per_sec": 0, 00:08:51.666 "w_mbytes_per_sec": 0 00:08:51.666 }, 00:08:51.666 "claimed": true, 00:08:51.666 "claim_type": "exclusive_write", 00:08:51.666 "zoned": false, 00:08:51.666 "supported_io_types": { 00:08:51.666 "read": true, 00:08:51.666 "write": true, 00:08:51.666 "unmap": true, 00:08:51.666 "flush": true, 00:08:51.666 "reset": true, 00:08:51.666 "nvme_admin": false, 00:08:51.666 "nvme_io": false, 00:08:51.666 "nvme_io_md": false, 00:08:51.666 "write_zeroes": true, 00:08:51.666 "zcopy": true, 00:08:51.666 "get_zone_info": false, 00:08:51.666 "zone_management": false, 00:08:51.666 "zone_append": false, 00:08:51.666 "compare": false, 00:08:51.666 "compare_and_write": false, 00:08:51.666 "abort": true, 00:08:51.666 "seek_hole": false, 00:08:51.666 "seek_data": false, 00:08:51.666 "copy": true, 00:08:51.666 "nvme_iov_md": false 00:08:51.666 }, 00:08:51.666 "memory_domains": [ 00:08:51.666 { 00:08:51.666 "dma_device_id": "system", 00:08:51.666 "dma_device_type": 1 00:08:51.666 }, 00:08:51.666 { 00:08:51.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.666 "dma_device_type": 2 00:08:51.666 } 00:08:51.666 ], 00:08:51.666 "driver_specific": {} 00:08:51.666 }' 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:51.666 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:51.927 "name": "BaseBdev2", 00:08:51.927 "aliases": [ 00:08:51.927 "474eadcb-4a2e-11ef-9c8e-7947904e2597" 00:08:51.927 ], 00:08:51.927 "product_name": "Malloc disk", 00:08:51.927 "block_size": 512, 00:08:51.927 "num_blocks": 65536, 00:08:51.927 "uuid": "474eadcb-4a2e-11ef-9c8e-7947904e2597", 00:08:51.927 "assigned_rate_limits": { 00:08:51.927 "rw_ios_per_sec": 0, 00:08:51.927 "rw_mbytes_per_sec": 0, 00:08:51.927 "r_mbytes_per_sec": 0, 00:08:51.927 "w_mbytes_per_sec": 0 00:08:51.927 }, 00:08:51.927 "claimed": true, 00:08:51.927 "claim_type": "exclusive_write", 00:08:51.927 "zoned": false, 00:08:51.927 "supported_io_types": { 00:08:51.927 "read": true, 00:08:51.927 "write": true, 00:08:51.927 "unmap": true, 00:08:51.927 "flush": true, 00:08:51.927 "reset": true, 00:08:51.927 "nvme_admin": false, 00:08:51.927 "nvme_io": false, 00:08:51.927 "nvme_io_md": false, 00:08:51.927 "write_zeroes": true, 00:08:51.927 "zcopy": true, 00:08:51.927 "get_zone_info": false, 00:08:51.927 "zone_management": false, 00:08:51.927 "zone_append": false, 00:08:51.927 "compare": false, 00:08:51.927 "compare_and_write": false, 00:08:51.927 "abort": true, 00:08:51.927 "seek_hole": false, 00:08:51.927 "seek_data": false, 00:08:51.927 "copy": true, 00:08:51.927 "nvme_iov_md": false 00:08:51.927 }, 00:08:51.927 "memory_domains": [ 00:08:51.927 { 00:08:51.927 "dma_device_id": "system", 00:08:51.927 "dma_device_type": 1 00:08:51.927 }, 00:08:51.927 { 00:08:51.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.927 "dma_device_type": 2 00:08:51.927 } 00:08:51.927 ], 00:08:51.927 "driver_specific": {} 00:08:51.927 }' 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:51.927 02:33:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:08:51.927 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:52.187 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:52.187 "name": "BaseBdev3", 00:08:52.187 "aliases": [ 00:08:52.187 "47a4c9c2-4a2e-11ef-9c8e-7947904e2597" 00:08:52.187 ], 00:08:52.187 "product_name": "Malloc disk", 00:08:52.187 "block_size": 512, 00:08:52.187 "num_blocks": 65536, 00:08:52.187 "uuid": "47a4c9c2-4a2e-11ef-9c8e-7947904e2597", 00:08:52.187 "assigned_rate_limits": { 00:08:52.187 "rw_ios_per_sec": 0, 00:08:52.187 "rw_mbytes_per_sec": 0, 00:08:52.188 "r_mbytes_per_sec": 0, 00:08:52.188 "w_mbytes_per_sec": 0 00:08:52.188 }, 00:08:52.188 "claimed": true, 00:08:52.188 "claim_type": "exclusive_write", 00:08:52.188 "zoned": false, 00:08:52.188 "supported_io_types": { 00:08:52.188 "read": true, 00:08:52.188 "write": true, 00:08:52.188 "unmap": true, 00:08:52.188 "flush": true, 00:08:52.188 "reset": true, 00:08:52.188 "nvme_admin": false, 00:08:52.188 "nvme_io": false, 00:08:52.188 "nvme_io_md": false, 00:08:52.188 "write_zeroes": true, 00:08:52.188 "zcopy": true, 00:08:52.188 "get_zone_info": false, 00:08:52.188 "zone_management": false, 00:08:52.188 "zone_append": false, 00:08:52.188 "compare": false, 00:08:52.188 "compare_and_write": false, 00:08:52.188 "abort": true, 00:08:52.188 "seek_hole": false, 00:08:52.188 "seek_data": false, 00:08:52.188 "copy": true, 00:08:52.188 "nvme_iov_md": false 00:08:52.188 }, 00:08:52.188 "memory_domains": [ 00:08:52.188 { 00:08:52.188 "dma_device_id": "system", 00:08:52.188 "dma_device_type": 1 00:08:52.188 }, 00:08:52.188 { 00:08:52.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.188 "dma_device_type": 2 00:08:52.188 } 00:08:52.188 ], 00:08:52.188 "driver_specific": {} 00:08:52.188 }' 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:52.188 02:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:52.448 [2024-07-25 02:33:39.136475] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:52.448 [2024-07-25 02:33:39.136491] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.448 [2024-07-25 02:33:39.136506] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.448 [2024-07-25 02:33:39.136528] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.448 [2024-07-25 02:33:39.136532] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x279e2f434a00 name Existed_Raid, state offline 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52571 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52571 ']' 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52571 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52571 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:52.448 killing process with pid 52571 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52571' 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52571 00:08:52.448 [2024-07-25 02:33:39.180408] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.448 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52571 00:08:52.448 [2024-07-25 02:33:39.208388] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.709 02:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:52.709 00:08:52.709 real 0m18.308s 00:08:52.709 user 0m32.703s 00:08:52.709 sys 0m3.163s 00:08:52.709 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.709 02:33:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.709 ************************************ 00:08:52.709 END TEST raid_state_function_test_sb 00:08:52.709 ************************************ 00:08:52.709 02:33:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:52.709 02:33:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:52.709 02:33:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:52.709 02:33:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.709 02:33:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.709 ************************************ 00:08:52.709 START TEST raid_superblock_test 00:08:52.709 ************************************ 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53279 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53279 /var/tmp/spdk-raid.sock 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53279 ']' 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:52.709 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:52.710 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.710 02:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.710 [2024-07-25 02:33:39.550946] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:08:52.710 [2024-07-25 02:33:39.551160] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:53.280 EAL: TSC is not safe to use in SMP mode 00:08:53.280 EAL: TSC is not invariant 00:08:53.280 [2024-07-25 02:33:39.968254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.280 [2024-07-25 02:33:40.085946] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:53.280 [2024-07-25 02:33:40.088345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.280 [2024-07-25 02:33:40.089011] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.280 [2024-07-25 02:33:40.089024] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:53.850 malloc1 00:08:53.850 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.110 [2024-07-25 02:33:40.805620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.111 [2024-07-25 02:33:40.805677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.111 [2024-07-25 02:33:40.805686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9434780 00:08:54.111 [2024-07-25 02:33:40.805692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.111 [2024-07-25 02:33:40.806663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.111 [2024-07-25 02:33:40.806695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.111 pt1 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.111 02:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.111 02:33:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:54.111 malloc2 00:08:54.370 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.371 [2024-07-25 02:33:41.193772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.371 [2024-07-25 02:33:41.193807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.371 [2024-07-25 02:33:41.193814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9434c80 00:08:54.371 [2024-07-25 02:33:41.193820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.371 [2024-07-25 02:33:41.194204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.371 [2024-07-25 02:33:41.194227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.371 pt2 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.371 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:54.630 malloc3 00:08:54.630 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:54.890 [2024-07-25 02:33:41.581957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:54.890 [2024-07-25 02:33:41.582014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.890 [2024-07-25 02:33:41.582022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9435180 00:08:54.890 [2024-07-25 02:33:41.582028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.890 [2024-07-25 02:33:41.582733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.890 [2024-07-25 02:33:41.582768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:54.890 pt3 00:08:54.890 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:54.890 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:54.890 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:08:54.890 [2024-07-25 02:33:41.774033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.890 [2024-07-25 02:33:41.774426] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.890 [2024-07-25 02:33:41.774450] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:54.890 [2024-07-25 02:33:41.774501] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f21b9435400 00:08:54.890 [2024-07-25 02:33:41.774509] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.890 [2024-07-25 02:33:41.774540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f21b9497e20 00:08:54.890 [2024-07-25 02:33:41.774599] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f21b9435400 00:08:54.890 [2024-07-25 02:33:41.774603] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f21b9435400 00:08:54.890 [2024-07-25 02:33:41.774620] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:55.151 "name": "raid_bdev1", 00:08:55.151 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:08:55.151 "strip_size_kb": 64, 00:08:55.151 "state": "online", 00:08:55.151 "raid_level": "raid0", 00:08:55.151 "superblock": true, 00:08:55.151 "num_base_bdevs": 3, 00:08:55.151 "num_base_bdevs_discovered": 3, 00:08:55.151 "num_base_bdevs_operational": 3, 00:08:55.151 "base_bdevs_list": [ 00:08:55.151 { 00:08:55.151 "name": "pt1", 00:08:55.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.151 "is_configured": true, 00:08:55.151 "data_offset": 2048, 00:08:55.151 "data_size": 63488 00:08:55.151 }, 00:08:55.151 { 00:08:55.151 "name": "pt2", 00:08:55.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.151 "is_configured": true, 00:08:55.151 
"data_offset": 2048, 00:08:55.151 "data_size": 63488 00:08:55.151 }, 00:08:55.151 { 00:08:55.151 "name": "pt3", 00:08:55.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.151 "is_configured": true, 00:08:55.151 "data_offset": 2048, 00:08:55.151 "data_size": 63488 00:08:55.151 } 00:08:55.151 ] 00:08:55.151 }' 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:55.151 02:33:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:55.411 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:55.671 [2024-07-25 02:33:42.406323] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.671 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:55.671 "name": "raid_bdev1", 00:08:55.671 "aliases": [ 00:08:55.671 "4e800bfb-4a2e-11ef-9c8e-7947904e2597" 00:08:55.671 ], 00:08:55.671 "product_name": "Raid Volume", 00:08:55.671 "block_size": 512, 00:08:55.671 "num_blocks": 190464, 00:08:55.671 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:08:55.671 "assigned_rate_limits": { 00:08:55.671 "rw_ios_per_sec": 0, 00:08:55.671 "rw_mbytes_per_sec": 0, 00:08:55.671 "r_mbytes_per_sec": 0, 00:08:55.671 "w_mbytes_per_sec": 0 00:08:55.671 }, 00:08:55.671 "claimed": false, 00:08:55.671 "zoned": false, 00:08:55.671 "supported_io_types": { 00:08:55.671 "read": true, 00:08:55.671 "write": true, 00:08:55.671 "unmap": true, 00:08:55.671 "flush": true, 00:08:55.671 "reset": true, 00:08:55.671 "nvme_admin": false, 00:08:55.671 "nvme_io": false, 00:08:55.671 "nvme_io_md": false, 00:08:55.671 "write_zeroes": true, 00:08:55.671 "zcopy": false, 00:08:55.671 "get_zone_info": false, 00:08:55.671 "zone_management": false, 00:08:55.671 "zone_append": false, 00:08:55.671 "compare": false, 00:08:55.671 "compare_and_write": false, 00:08:55.671 "abort": false, 00:08:55.671 "seek_hole": false, 00:08:55.671 "seek_data": false, 00:08:55.671 "copy": false, 00:08:55.671 "nvme_iov_md": false 00:08:55.671 }, 00:08:55.671 "memory_domains": [ 00:08:55.671 { 00:08:55.671 "dma_device_id": "system", 00:08:55.671 "dma_device_type": 1 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.671 "dma_device_type": 2 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "dma_device_id": "system", 00:08:55.671 "dma_device_type": 1 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.671 "dma_device_type": 2 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "dma_device_id": "system", 00:08:55.671 "dma_device_type": 1 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:55.671 "dma_device_type": 2 00:08:55.671 } 00:08:55.671 ], 00:08:55.671 "driver_specific": { 00:08:55.671 "raid": { 00:08:55.671 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:08:55.671 "strip_size_kb": 64, 00:08:55.671 "state": "online", 00:08:55.671 "raid_level": "raid0", 00:08:55.671 "superblock": true, 00:08:55.671 "num_base_bdevs": 3, 00:08:55.671 "num_base_bdevs_discovered": 3, 00:08:55.671 "num_base_bdevs_operational": 3, 00:08:55.671 "base_bdevs_list": [ 00:08:55.671 { 00:08:55.671 "name": "pt1", 00:08:55.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.671 "is_configured": true, 00:08:55.671 "data_offset": 2048, 00:08:55.671 "data_size": 63488 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "name": "pt2", 00:08:55.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.671 "is_configured": true, 00:08:55.671 "data_offset": 2048, 00:08:55.671 "data_size": 63488 00:08:55.671 }, 00:08:55.671 { 00:08:55.671 "name": "pt3", 00:08:55.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.671 "is_configured": true, 00:08:55.671 "data_offset": 2048, 00:08:55.671 "data_size": 63488 00:08:55.671 } 00:08:55.671 ] 00:08:55.671 } 00:08:55.671 } 00:08:55.671 }' 00:08:55.671 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.671 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:55.671 pt2 00:08:55.671 pt3' 00:08:55.671 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:55.671 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:55.671 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:55.931 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:55.931 "name": "pt1", 00:08:55.931 "aliases": [ 00:08:55.931 "00000000-0000-0000-0000-000000000001" 00:08:55.931 ], 00:08:55.931 "product_name": "passthru", 00:08:55.931 "block_size": 512, 00:08:55.931 "num_blocks": 65536, 00:08:55.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.931 "assigned_rate_limits": { 00:08:55.931 "rw_ios_per_sec": 0, 00:08:55.931 "rw_mbytes_per_sec": 0, 00:08:55.931 "r_mbytes_per_sec": 0, 00:08:55.931 "w_mbytes_per_sec": 0 00:08:55.931 }, 00:08:55.931 "claimed": true, 00:08:55.931 "claim_type": "exclusive_write", 00:08:55.931 "zoned": false, 00:08:55.931 "supported_io_types": { 00:08:55.931 "read": true, 00:08:55.931 "write": true, 00:08:55.931 "unmap": true, 00:08:55.931 "flush": true, 00:08:55.931 "reset": true, 00:08:55.931 "nvme_admin": false, 00:08:55.931 "nvme_io": false, 00:08:55.931 "nvme_io_md": false, 00:08:55.931 "write_zeroes": true, 00:08:55.931 "zcopy": true, 00:08:55.931 "get_zone_info": false, 00:08:55.931 "zone_management": false, 00:08:55.931 "zone_append": false, 00:08:55.931 "compare": false, 00:08:55.931 "compare_and_write": false, 00:08:55.931 "abort": true, 00:08:55.931 "seek_hole": false, 00:08:55.931 "seek_data": false, 00:08:55.931 "copy": true, 00:08:55.931 "nvme_iov_md": false 00:08:55.931 }, 00:08:55.931 "memory_domains": [ 00:08:55.931 { 00:08:55.931 "dma_device_id": "system", 00:08:55.931 "dma_device_type": 1 00:08:55.931 }, 00:08:55.931 { 00:08:55.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.931 "dma_device_type": 2 
00:08:55.931 } 00:08:55.931 ], 00:08:55.931 "driver_specific": { 00:08:55.931 "passthru": { 00:08:55.931 "name": "pt1", 00:08:55.931 "base_bdev_name": "malloc1" 00:08:55.932 } 00:08:55.932 } 00:08:55.932 }' 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:55.932 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:56.192 "name": "pt2", 00:08:56.192 "aliases": [ 00:08:56.192 "00000000-0000-0000-0000-000000000002" 00:08:56.192 ], 00:08:56.192 "product_name": "passthru", 00:08:56.192 "block_size": 512, 00:08:56.192 "num_blocks": 65536, 00:08:56.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.192 "assigned_rate_limits": { 00:08:56.192 "rw_ios_per_sec": 0, 00:08:56.192 "rw_mbytes_per_sec": 0, 00:08:56.192 "r_mbytes_per_sec": 0, 00:08:56.192 "w_mbytes_per_sec": 0 00:08:56.192 }, 00:08:56.192 "claimed": true, 00:08:56.192 "claim_type": "exclusive_write", 00:08:56.192 "zoned": false, 00:08:56.192 "supported_io_types": { 00:08:56.192 "read": true, 00:08:56.192 "write": true, 00:08:56.192 "unmap": true, 00:08:56.192 "flush": true, 00:08:56.192 "reset": true, 00:08:56.192 "nvme_admin": false, 00:08:56.192 "nvme_io": false, 00:08:56.192 "nvme_io_md": false, 00:08:56.192 "write_zeroes": true, 00:08:56.192 "zcopy": true, 00:08:56.192 "get_zone_info": false, 00:08:56.192 "zone_management": false, 00:08:56.192 "zone_append": false, 00:08:56.192 "compare": false, 00:08:56.192 "compare_and_write": false, 00:08:56.192 "abort": true, 00:08:56.192 "seek_hole": false, 00:08:56.192 "seek_data": false, 00:08:56.192 "copy": true, 00:08:56.192 "nvme_iov_md": false 00:08:56.192 }, 00:08:56.192 "memory_domains": [ 00:08:56.192 { 00:08:56.192 "dma_device_id": "system", 00:08:56.192 "dma_device_type": 1 00:08:56.192 }, 00:08:56.192 { 00:08:56.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.192 "dma_device_type": 2 00:08:56.192 } 00:08:56.192 ], 00:08:56.192 "driver_specific": { 00:08:56.192 "passthru": { 00:08:56.192 "name": "pt2", 00:08:56.192 "base_bdev_name": 
"malloc2" 00:08:56.192 } 00:08:56.192 } 00:08:56.192 }' 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:08:56.192 02:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:56.452 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:56.452 "name": "pt3", 00:08:56.452 "aliases": [ 00:08:56.452 "00000000-0000-0000-0000-000000000003" 00:08:56.452 ], 00:08:56.452 "product_name": "passthru", 00:08:56.452 "block_size": 512, 00:08:56.452 "num_blocks": 65536, 00:08:56.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.452 "assigned_rate_limits": { 00:08:56.452 "rw_ios_per_sec": 0, 00:08:56.452 "rw_mbytes_per_sec": 0, 00:08:56.452 "r_mbytes_per_sec": 0, 00:08:56.452 "w_mbytes_per_sec": 0 00:08:56.452 }, 00:08:56.452 "claimed": true, 00:08:56.452 "claim_type": "exclusive_write", 00:08:56.452 "zoned": false, 00:08:56.452 "supported_io_types": { 00:08:56.452 "read": true, 00:08:56.452 "write": true, 00:08:56.452 "unmap": true, 00:08:56.452 "flush": true, 00:08:56.452 "reset": true, 00:08:56.452 "nvme_admin": false, 00:08:56.452 "nvme_io": false, 00:08:56.452 "nvme_io_md": false, 00:08:56.452 "write_zeroes": true, 00:08:56.452 "zcopy": true, 00:08:56.452 "get_zone_info": false, 00:08:56.452 "zone_management": false, 00:08:56.453 "zone_append": false, 00:08:56.453 "compare": false, 00:08:56.453 "compare_and_write": false, 00:08:56.453 "abort": true, 00:08:56.453 "seek_hole": false, 00:08:56.453 "seek_data": false, 00:08:56.453 "copy": true, 00:08:56.453 "nvme_iov_md": false 00:08:56.453 }, 00:08:56.453 "memory_domains": [ 00:08:56.453 { 00:08:56.453 "dma_device_id": "system", 00:08:56.453 "dma_device_type": 1 00:08:56.453 }, 00:08:56.453 { 00:08:56.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.453 "dma_device_type": 2 00:08:56.453 } 00:08:56.453 ], 00:08:56.453 "driver_specific": { 00:08:56.453 "passthru": { 00:08:56.453 "name": "pt3", 00:08:56.453 "base_bdev_name": "malloc3" 00:08:56.453 } 00:08:56.453 } 00:08:56.453 }' 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:56.453 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:56.711 [2024-07-25 02:33:43.438736] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.711 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4e800bfb-4a2e-11ef-9c8e-7947904e2597 00:08:56.711 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4e800bfb-4a2e-11ef-9c8e-7947904e2597 ']' 00:08:56.711 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:56.970 [2024-07-25 02:33:43.638794] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.970 [2024-07-25 02:33:43.638806] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.970 [2024-07-25 02:33:43.638822] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.970 [2024-07-25 02:33:43.638835] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.970 [2024-07-25 02:33:43.638838] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f21b9435400 name raid_bdev1, state offline 00:08:56.970 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.970 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:56.970 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:56.970 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:56.970 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.970 02:33:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:57.228 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:57.228 02:33:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:57.486 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:57.486 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.756 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:57.757 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:57.757 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:58.032 [2024-07-25 02:33:44.747273] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:58.032 [2024-07-25 02:33:44.747995] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:58.032 [2024-07-25 02:33:44.748018] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:58.032 [2024-07-25 02:33:44.748034] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:58.032 [2024-07-25 02:33:44.748080] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:58.032 [2024-07-25 02:33:44.748090] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:08:58.032 [2024-07-25 02:33:44.748097] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.032 [2024-07-25 02:33:44.748102] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f21b9435180 name raid_bdev1, state configuring 00:08:58.032 request: 00:08:58.032 { 00:08:58.032 "name": "raid_bdev1", 00:08:58.032 "raid_level": "raid0", 00:08:58.032 "base_bdevs": [ 00:08:58.032 "malloc1", 00:08:58.032 "malloc2", 00:08:58.032 "malloc3" 00:08:58.032 ], 00:08:58.032 "strip_size_kb": 64, 00:08:58.032 "superblock": false, 00:08:58.032 "method": "bdev_raid_create", 00:08:58.032 "req_id": 1 00:08:58.032 } 00:08:58.032 Got JSON-RPC error response 00:08:58.032 response: 00:08:58.032 { 00:08:58.032 "code": -17, 00:08:58.032 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:58.032 } 00:08:58.032 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:58.032 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.032 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.032 02:33:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.032 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.032 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:58.292 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:58.292 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:58.292 02:33:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.292 [2024-07-25 02:33:45.119419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.292 [2024-07-25 02:33:45.119476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.292 [2024-07-25 02:33:45.119490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9434c80 00:08:58.292 [2024-07-25 02:33:45.119497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.292 [2024-07-25 02:33:45.120239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.292 [2024-07-25 02:33:45.120270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.292 [2024-07-25 02:33:45.120289] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:58.292 [2024-07-25 02:33:45.120300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.292 pt1 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.292 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.551 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:58.551 "name": "raid_bdev1", 00:08:58.551 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:08:58.551 "strip_size_kb": 64, 00:08:58.551 "state": "configuring", 00:08:58.551 "raid_level": "raid0", 00:08:58.551 "superblock": true, 00:08:58.551 "num_base_bdevs": 3, 00:08:58.551 "num_base_bdevs_discovered": 1, 00:08:58.551 "num_base_bdevs_operational": 3, 00:08:58.551 "base_bdevs_list": [ 00:08:58.551 { 00:08:58.551 "name": "pt1", 00:08:58.551 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.551 "is_configured": true, 00:08:58.551 "data_offset": 2048, 00:08:58.551 "data_size": 63488 00:08:58.551 }, 00:08:58.552 { 00:08:58.552 "name": null, 00:08:58.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.552 "is_configured": false, 00:08:58.552 "data_offset": 2048, 00:08:58.552 "data_size": 63488 00:08:58.552 }, 00:08:58.552 { 00:08:58.552 "name": null, 00:08:58.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.552 "is_configured": false, 00:08:58.552 "data_offset": 2048, 00:08:58.552 "data_size": 63488 00:08:58.552 } 00:08:58.552 ] 00:08:58.552 }' 00:08:58.552 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:58.552 02:33:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.811 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:08:58.811 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.071 [2024-07-25 02:33:45.775691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.071 [2024-07-25 02:33:45.775723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.071 [2024-07-25 02:33:45.775731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9435680 00:08:59.071 [2024-07-25 02:33:45.775738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.071 [2024-07-25 02:33:45.775828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.071 [2024-07-25 02:33:45.775835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.071 [2024-07-25 02:33:45.775850] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.071 [2024-07-25 02:33:45.775857] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.071 
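The state checks that bracket these passthru create/delete steps all go through the same bdev_raid_get_bdevs + jq filter used earlier. A compact sketch of polling the raid state and the discovered-base-bdev count, with the socket path and names taken from this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Assumes the target is still running and raid_bdev1 has been rediscovered from the on-disk superblocks.
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  echo "state:      $(jq -r .state <<< "$info")"                     # "configuring" until all three base bdevs are back
  echo "discovered: $(jq -r .num_base_bdevs_discovered <<< "$info")" # 1 with only pt1 claimed, 3 once pt1-pt3 are claimed
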
pt2 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:59.071 [2024-07-25 02:33:45.955753] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.071 02:33:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.355 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:59.355 "name": "raid_bdev1", 00:08:59.355 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:08:59.355 "strip_size_kb": 64, 00:08:59.355 "state": "configuring", 00:08:59.355 "raid_level": "raid0", 00:08:59.355 "superblock": true, 00:08:59.355 "num_base_bdevs": 3, 00:08:59.355 "num_base_bdevs_discovered": 1, 00:08:59.355 "num_base_bdevs_operational": 3, 00:08:59.355 "base_bdevs_list": [ 00:08:59.355 { 00:08:59.355 "name": "pt1", 00:08:59.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.355 "is_configured": true, 00:08:59.355 "data_offset": 2048, 00:08:59.355 "data_size": 63488 00:08:59.355 }, 00:08:59.355 { 00:08:59.355 "name": null, 00:08:59.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.355 "is_configured": false, 00:08:59.355 "data_offset": 2048, 00:08:59.355 "data_size": 63488 00:08:59.355 }, 00:08:59.355 { 00:08:59.355 "name": null, 00:08:59.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.355 "is_configured": false, 00:08:59.355 "data_offset": 2048, 00:08:59.355 "data_size": 63488 00:08:59.355 } 00:08:59.355 ] 00:08:59.355 }' 00:08:59.355 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:59.355 02:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.614 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:59.614 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:59.614 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.873 [2024-07-25 
02:33:46.584000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.873 [2024-07-25 02:33:46.584027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.873 [2024-07-25 02:33:46.584035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9435680 00:08:59.873 [2024-07-25 02:33:46.584042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.873 [2024-07-25 02:33:46.584124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.873 [2024-07-25 02:33:46.584132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.873 [2024-07-25 02:33:46.584146] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.873 [2024-07-25 02:33:46.584152] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.873 pt2 00:08:59.873 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:59.873 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:59.873 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.873 [2024-07-25 02:33:46.768068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.873 [2024-07-25 02:33:46.768091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.873 [2024-07-25 02:33:46.768099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f21b9435400 00:08:59.873 [2024-07-25 02:33:46.768105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.873 [2024-07-25 02:33:46.768155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.873 [2024-07-25 02:33:46.768162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.873 [2024-07-25 02:33:46.768174] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:59.873 [2024-07-25 02:33:46.768179] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.873 [2024-07-25 02:33:46.768197] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f21b9434780 00:08:59.873 [2024-07-25 02:33:46.768201] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.873 [2024-07-25 02:33:46.768217] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f21b9497e20 00:08:59.873 [2024-07-25 02:33:46.768263] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f21b9434780 00:08:59.873 [2024-07-25 02:33:46.768267] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f21b9434780 00:08:59.873 [2024-07-25 02:33:46.768282] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.873 pt3 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:00.132 "name": "raid_bdev1", 00:09:00.132 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:09:00.132 "strip_size_kb": 64, 00:09:00.132 "state": "online", 00:09:00.132 "raid_level": "raid0", 00:09:00.132 "superblock": true, 00:09:00.132 "num_base_bdevs": 3, 00:09:00.132 "num_base_bdevs_discovered": 3, 00:09:00.132 "num_base_bdevs_operational": 3, 00:09:00.132 "base_bdevs_list": [ 00:09:00.132 { 00:09:00.132 "name": "pt1", 00:09:00.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.132 "is_configured": true, 00:09:00.132 "data_offset": 2048, 00:09:00.132 "data_size": 63488 00:09:00.132 }, 00:09:00.132 { 00:09:00.132 "name": "pt2", 00:09:00.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.132 "is_configured": true, 00:09:00.132 "data_offset": 2048, 00:09:00.132 "data_size": 63488 00:09:00.132 }, 00:09:00.132 { 00:09:00.132 "name": "pt3", 00:09:00.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.132 "is_configured": true, 00:09:00.132 "data_offset": 2048, 00:09:00.132 "data_size": 63488 00:09:00.132 } 00:09:00.132 ] 00:09:00.132 }' 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:00.132 02:33:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:00.401 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:00.660 [2024-07-25 
02:33:47.416346] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.660 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:00.660 "name": "raid_bdev1", 00:09:00.660 "aliases": [ 00:09:00.660 "4e800bfb-4a2e-11ef-9c8e-7947904e2597" 00:09:00.660 ], 00:09:00.660 "product_name": "Raid Volume", 00:09:00.660 "block_size": 512, 00:09:00.660 "num_blocks": 190464, 00:09:00.660 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:09:00.660 "assigned_rate_limits": { 00:09:00.660 "rw_ios_per_sec": 0, 00:09:00.660 "rw_mbytes_per_sec": 0, 00:09:00.660 "r_mbytes_per_sec": 0, 00:09:00.660 "w_mbytes_per_sec": 0 00:09:00.660 }, 00:09:00.660 "claimed": false, 00:09:00.660 "zoned": false, 00:09:00.660 "supported_io_types": { 00:09:00.660 "read": true, 00:09:00.660 "write": true, 00:09:00.660 "unmap": true, 00:09:00.660 "flush": true, 00:09:00.660 "reset": true, 00:09:00.660 "nvme_admin": false, 00:09:00.660 "nvme_io": false, 00:09:00.660 "nvme_io_md": false, 00:09:00.660 "write_zeroes": true, 00:09:00.660 "zcopy": false, 00:09:00.660 "get_zone_info": false, 00:09:00.660 "zone_management": false, 00:09:00.660 "zone_append": false, 00:09:00.660 "compare": false, 00:09:00.660 "compare_and_write": false, 00:09:00.660 "abort": false, 00:09:00.660 "seek_hole": false, 00:09:00.660 "seek_data": false, 00:09:00.660 "copy": false, 00:09:00.660 "nvme_iov_md": false 00:09:00.660 }, 00:09:00.660 "memory_domains": [ 00:09:00.660 { 00:09:00.660 "dma_device_id": "system", 00:09:00.660 "dma_device_type": 1 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.660 "dma_device_type": 2 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "dma_device_id": "system", 00:09:00.660 "dma_device_type": 1 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.660 "dma_device_type": 2 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "dma_device_id": "system", 00:09:00.660 "dma_device_type": 1 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.660 "dma_device_type": 2 00:09:00.660 } 00:09:00.660 ], 00:09:00.660 "driver_specific": { 00:09:00.660 "raid": { 00:09:00.660 "uuid": "4e800bfb-4a2e-11ef-9c8e-7947904e2597", 00:09:00.660 "strip_size_kb": 64, 00:09:00.660 "state": "online", 00:09:00.660 "raid_level": "raid0", 00:09:00.660 "superblock": true, 00:09:00.660 "num_base_bdevs": 3, 00:09:00.660 "num_base_bdevs_discovered": 3, 00:09:00.660 "num_base_bdevs_operational": 3, 00:09:00.660 "base_bdevs_list": [ 00:09:00.660 { 00:09:00.660 "name": "pt1", 00:09:00.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.660 "is_configured": true, 00:09:00.660 "data_offset": 2048, 00:09:00.660 "data_size": 63488 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "name": "pt2", 00:09:00.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.660 "is_configured": true, 00:09:00.660 "data_offset": 2048, 00:09:00.660 "data_size": 63488 00:09:00.660 }, 00:09:00.660 { 00:09:00.660 "name": "pt3", 00:09:00.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.660 "is_configured": true, 00:09:00.660 "data_offset": 2048, 00:09:00.660 "data_size": 63488 00:09:00.660 } 00:09:00.660 ] 00:09:00.660 } 00:09:00.660 } 00:09:00.660 }' 00:09:00.660 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.660 02:33:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:00.660 pt2 00:09:00.660 pt3' 00:09:00.660 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:00.660 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:00.661 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:00.920 "name": "pt1", 00:09:00.920 "aliases": [ 00:09:00.920 "00000000-0000-0000-0000-000000000001" 00:09:00.920 ], 00:09:00.920 "product_name": "passthru", 00:09:00.920 "block_size": 512, 00:09:00.920 "num_blocks": 65536, 00:09:00.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.920 "assigned_rate_limits": { 00:09:00.920 "rw_ios_per_sec": 0, 00:09:00.920 "rw_mbytes_per_sec": 0, 00:09:00.920 "r_mbytes_per_sec": 0, 00:09:00.920 "w_mbytes_per_sec": 0 00:09:00.920 }, 00:09:00.920 "claimed": true, 00:09:00.920 "claim_type": "exclusive_write", 00:09:00.920 "zoned": false, 00:09:00.920 "supported_io_types": { 00:09:00.920 "read": true, 00:09:00.920 "write": true, 00:09:00.920 "unmap": true, 00:09:00.920 "flush": true, 00:09:00.920 "reset": true, 00:09:00.920 "nvme_admin": false, 00:09:00.920 "nvme_io": false, 00:09:00.920 "nvme_io_md": false, 00:09:00.920 "write_zeroes": true, 00:09:00.920 "zcopy": true, 00:09:00.920 "get_zone_info": false, 00:09:00.920 "zone_management": false, 00:09:00.920 "zone_append": false, 00:09:00.920 "compare": false, 00:09:00.920 "compare_and_write": false, 00:09:00.920 "abort": true, 00:09:00.920 "seek_hole": false, 00:09:00.920 "seek_data": false, 00:09:00.920 "copy": true, 00:09:00.920 "nvme_iov_md": false 00:09:00.920 }, 00:09:00.920 "memory_domains": [ 00:09:00.920 { 00:09:00.920 "dma_device_id": "system", 00:09:00.920 "dma_device_type": 1 00:09:00.920 }, 00:09:00.920 { 00:09:00.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.920 "dma_device_type": 2 00:09:00.920 } 00:09:00.920 ], 00:09:00.920 "driver_specific": { 00:09:00.920 "passthru": { 00:09:00.920 "name": "pt1", 00:09:00.920 "base_bdev_name": "malloc1" 00:09:00.920 } 00:09:00.920 } 00:09:00.920 }' 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:00.920 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:01.179 "name": "pt2", 00:09:01.179 "aliases": [ 00:09:01.179 "00000000-0000-0000-0000-000000000002" 00:09:01.179 ], 00:09:01.179 "product_name": "passthru", 00:09:01.179 "block_size": 512, 00:09:01.179 "num_blocks": 65536, 00:09:01.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.179 "assigned_rate_limits": { 00:09:01.179 "rw_ios_per_sec": 0, 00:09:01.179 "rw_mbytes_per_sec": 0, 00:09:01.179 "r_mbytes_per_sec": 0, 00:09:01.179 "w_mbytes_per_sec": 0 00:09:01.179 }, 00:09:01.179 "claimed": true, 00:09:01.179 "claim_type": "exclusive_write", 00:09:01.179 "zoned": false, 00:09:01.179 "supported_io_types": { 00:09:01.179 "read": true, 00:09:01.179 "write": true, 00:09:01.179 "unmap": true, 00:09:01.179 "flush": true, 00:09:01.179 "reset": true, 00:09:01.179 "nvme_admin": false, 00:09:01.179 "nvme_io": false, 00:09:01.179 "nvme_io_md": false, 00:09:01.179 "write_zeroes": true, 00:09:01.179 "zcopy": true, 00:09:01.179 "get_zone_info": false, 00:09:01.179 "zone_management": false, 00:09:01.179 "zone_append": false, 00:09:01.179 "compare": false, 00:09:01.179 "compare_and_write": false, 00:09:01.179 "abort": true, 00:09:01.179 "seek_hole": false, 00:09:01.179 "seek_data": false, 00:09:01.179 "copy": true, 00:09:01.179 "nvme_iov_md": false 00:09:01.179 }, 00:09:01.179 "memory_domains": [ 00:09:01.179 { 00:09:01.179 "dma_device_id": "system", 00:09:01.179 "dma_device_type": 1 00:09:01.179 }, 00:09:01.179 { 00:09:01.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.179 "dma_device_type": 2 00:09:01.179 } 00:09:01.179 ], 00:09:01.179 "driver_specific": { 00:09:01.179 "passthru": { 00:09:01.179 "name": "pt2", 00:09:01.179 "base_bdev_name": "malloc2" 00:09:01.179 } 00:09:01.179 } 00:09:01.179 }' 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:01.179 02:33:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:01.438 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:01.438 "name": "pt3", 00:09:01.438 "aliases": [ 00:09:01.438 "00000000-0000-0000-0000-000000000003" 00:09:01.438 ], 00:09:01.438 "product_name": "passthru", 00:09:01.439 "block_size": 512, 00:09:01.439 "num_blocks": 65536, 00:09:01.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.439 "assigned_rate_limits": { 00:09:01.439 "rw_ios_per_sec": 0, 00:09:01.439 "rw_mbytes_per_sec": 0, 00:09:01.439 "r_mbytes_per_sec": 0, 00:09:01.439 "w_mbytes_per_sec": 0 00:09:01.439 }, 00:09:01.439 "claimed": true, 00:09:01.439 "claim_type": "exclusive_write", 00:09:01.439 "zoned": false, 00:09:01.439 "supported_io_types": { 00:09:01.439 "read": true, 00:09:01.439 "write": true, 00:09:01.439 "unmap": true, 00:09:01.439 "flush": true, 00:09:01.439 "reset": true, 00:09:01.439 "nvme_admin": false, 00:09:01.439 "nvme_io": false, 00:09:01.439 "nvme_io_md": false, 00:09:01.439 "write_zeroes": true, 00:09:01.439 "zcopy": true, 00:09:01.439 "get_zone_info": false, 00:09:01.439 "zone_management": false, 00:09:01.439 "zone_append": false, 00:09:01.439 "compare": false, 00:09:01.439 "compare_and_write": false, 00:09:01.439 "abort": true, 00:09:01.439 "seek_hole": false, 00:09:01.439 "seek_data": false, 00:09:01.439 "copy": true, 00:09:01.439 "nvme_iov_md": false 00:09:01.439 }, 00:09:01.439 "memory_domains": [ 00:09:01.439 { 00:09:01.439 "dma_device_id": "system", 00:09:01.439 "dma_device_type": 1 00:09:01.439 }, 00:09:01.439 { 00:09:01.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.439 "dma_device_type": 2 00:09:01.439 } 00:09:01.439 ], 00:09:01.439 "driver_specific": { 00:09:01.439 "passthru": { 00:09:01.439 "name": "pt3", 00:09:01.439 "base_bdev_name": "malloc3" 00:09:01.439 } 00:09:01.439 } 00:09:01.439 }' 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:01.439 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:01.699 [2024-07-25 02:33:48.456712] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4e800bfb-4a2e-11ef-9c8e-7947904e2597 '!=' 4e800bfb-4a2e-11ef-9c8e-7947904e2597 ']' 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53279 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53279 ']' 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53279 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53279 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:01.699 killing process with pid 53279 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53279' 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53279 00:09:01.699 [2024-07-25 02:33:48.498644] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.699 [2024-07-25 02:33:48.498662] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.699 [2024-07-25 02:33:48.498685] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.699 [2024-07-25 02:33:48.498689] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f21b9434780 name raid_bdev1, state offline 00:09:01.699 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53279 00:09:01.699 [2024-07-25 02:33:48.526690] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.959 02:33:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:01.959 00:09:01.959 real 0m9.254s 00:09:01.959 user 0m16.028s 00:09:01.959 sys 0m1.686s 00:09:01.959 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.959 02:33:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.959 ************************************ 00:09:01.959 END TEST raid_superblock_test 00:09:01.959 ************************************ 00:09:01.959 02:33:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:01.959 02:33:48 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:01.959 02:33:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:01.959 02:33:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.959 02:33:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.959 ************************************ 
00:09:01.959 START TEST raid_read_error_test 00:09:01.959 ************************************ 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:01.959 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.lZMeiwPBCT 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53622 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53622 /var/tmp/spdk-raid.sock 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53622 ']' 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.219 02:33:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.219 [2024-07-25 02:33:48.886052] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:09:02.219 [2024-07-25 02:33:48.886345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:02.478 EAL: TSC is not safe to use in SMP mode 00:09:02.478 EAL: TSC is not invariant 00:09:02.478 [2024-07-25 02:33:49.305557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.737 [2024-07-25 02:33:49.397876] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:02.737 [2024-07-25 02:33:49.400026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.737 [2024-07-25 02:33:49.400673] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.737 [2024-07-25 02:33:49.400684] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.997 02:33:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.997 02:33:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:02.997 02:33:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:02.997 02:33:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.256 BaseBdev1_malloc 00:09:03.256 02:33:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:03.256 true 00:09:03.256 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.516 [2024-07-25 02:33:50.323927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.516 [2024-07-25 02:33:50.323983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.516 [2024-07-25 02:33:50.324002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c5e8434780 00:09:03.516 [2024-07-25 02:33:50.324008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.516 [2024-07-25 02:33:50.324460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.516 [2024-07-25 02:33:50.324487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.516 BaseBdev1 00:09:03.516 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:03.516 02:33:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.776 BaseBdev2_malloc 00:09:03.776 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:03.776 true 00:09:03.776 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:04.035 [2024-07-25 02:33:50.800092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:04.035 [2024-07-25 02:33:50.800129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.035 [2024-07-25 02:33:50.800150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c5e8434c80 00:09:04.035 [2024-07-25 02:33:50.800155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.035 [2024-07-25 02:33:50.800596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.035 [2024-07-25 02:33:50.800624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:04.035 BaseBdev2 00:09:04.035 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:04.035 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:04.295 BaseBdev3_malloc 00:09:04.295 02:33:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:04.295 true 00:09:04.295 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:04.555 [2024-07-25 02:33:51.340274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:04.555 [2024-07-25 02:33:51.340312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.555 [2024-07-25 02:33:51.340330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c5e8435180 00:09:04.555 [2024-07-25 02:33:51.340336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.555 [2024-07-25 02:33:51.340770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.555 [2024-07-25 02:33:51.340798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:04.555 BaseBdev3 00:09:04.555 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:04.814 [2024-07-25 02:33:51.520340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:04.814 [2024-07-25 02:33:51.520756] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.814 [2024-07-25 02:33:51.520779] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.814 
[2024-07-25 02:33:51.520827] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x14c5e8435400 00:09:04.814 [2024-07-25 02:33:51.520832] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.814 [2024-07-25 02:33:51.520863] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c5e84a0e20 00:09:04.814 [2024-07-25 02:33:51.520911] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14c5e8435400 00:09:04.814 [2024-07-25 02:33:51.520914] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14c5e8435400 00:09:04.814 [2024-07-25 02:33:51.520931] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:04.814 "name": "raid_bdev1", 00:09:04.814 "uuid": "544f3723-4a2e-11ef-9c8e-7947904e2597", 00:09:04.814 "strip_size_kb": 64, 00:09:04.814 "state": "online", 00:09:04.814 "raid_level": "raid0", 00:09:04.814 "superblock": true, 00:09:04.814 "num_base_bdevs": 3, 00:09:04.814 "num_base_bdevs_discovered": 3, 00:09:04.814 "num_base_bdevs_operational": 3, 00:09:04.814 "base_bdevs_list": [ 00:09:04.814 { 00:09:04.814 "name": "BaseBdev1", 00:09:04.814 "uuid": "6f763d85-c362-d95f-9edc-a5966efff59b", 00:09:04.814 "is_configured": true, 00:09:04.814 "data_offset": 2048, 00:09:04.814 "data_size": 63488 00:09:04.814 }, 00:09:04.814 { 00:09:04.814 "name": "BaseBdev2", 00:09:04.814 "uuid": "a90a07aa-eefa-8a5d-ae39-b8986ec7084c", 00:09:04.814 "is_configured": true, 00:09:04.814 "data_offset": 2048, 00:09:04.814 "data_size": 63488 00:09:04.814 }, 00:09:04.814 { 00:09:04.814 "name": "BaseBdev3", 00:09:04.814 "uuid": "0f7cc76f-2561-d15f-bbdf-b86c61600fa7", 00:09:04.814 "is_configured": true, 00:09:04.814 "data_offset": 2048, 00:09:04.814 "data_size": 63488 00:09:04.814 } 00:09:04.814 ] 00:09:04.814 }' 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:04.814 02:33:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.383 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:05.383 02:33:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:05.383 [2024-07-25 02:33:52.068588] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c5e84a0ec0 00:09:06.321 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:06.581 "name": "raid_bdev1", 00:09:06.581 "uuid": "544f3723-4a2e-11ef-9c8e-7947904e2597", 00:09:06.581 "strip_size_kb": 64, 00:09:06.581 "state": "online", 00:09:06.581 "raid_level": "raid0", 00:09:06.581 "superblock": true, 00:09:06.581 "num_base_bdevs": 3, 00:09:06.581 "num_base_bdevs_discovered": 3, 00:09:06.581 "num_base_bdevs_operational": 3, 00:09:06.581 "base_bdevs_list": [ 00:09:06.581 { 00:09:06.581 "name": "BaseBdev1", 00:09:06.581 "uuid": "6f763d85-c362-d95f-9edc-a5966efff59b", 00:09:06.581 "is_configured": true, 00:09:06.581 "data_offset": 2048, 00:09:06.581 "data_size": 63488 00:09:06.581 }, 00:09:06.581 { 00:09:06.581 "name": "BaseBdev2", 00:09:06.581 "uuid": "a90a07aa-eefa-8a5d-ae39-b8986ec7084c", 00:09:06.581 "is_configured": true, 00:09:06.581 "data_offset": 2048, 00:09:06.581 "data_size": 63488 00:09:06.581 }, 00:09:06.581 { 00:09:06.581 "name": "BaseBdev3", 00:09:06.581 "uuid": "0f7cc76f-2561-d15f-bbdf-b86c61600fa7", 00:09:06.581 "is_configured": true, 00:09:06.581 "data_offset": 2048, 00:09:06.581 "data_size": 63488 
00:09:06.581 } 00:09:06.581 ] 00:09:06.581 }' 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:06.581 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.849 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:07.135 [2024-07-25 02:33:53.865415] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.135 [2024-07-25 02:33:53.865438] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.135 [2024-07-25 02:33:53.865736] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.135 [2024-07-25 02:33:53.865750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.135 [2024-07-25 02:33:53.865756] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.135 [2024-07-25 02:33:53.865760] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14c5e8435400 name raid_bdev1, state offline 00:09:07.135 0 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53622 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53622 ']' 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53622 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53622 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:07.135 killing process with pid 53622 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53622' 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53622 00:09:07.135 [2024-07-25 02:33:53.896111] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.135 02:33:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53622 00:09:07.135 [2024-07-25 02:33:53.909908] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.lZMeiwPBCT 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:07.394 02:33:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:09:07.394 00:09:07.394 real 0m5.224s 00:09:07.394 user 0m7.745s 00:09:07.395 sys 0m0.961s 00:09:07.395 02:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.395 02:33:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.395 ************************************ 00:09:07.395 END TEST raid_read_error_test 00:09:07.395 ************************************ 00:09:07.395 02:33:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:07.395 02:33:54 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:07.395 02:33:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:07.395 02:33:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.395 02:33:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.395 ************************************ 00:09:07.395 START TEST raid_write_error_test 00:09:07.395 ************************************ 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:07.395 02:33:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Y028xkDFDY 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53749 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53749 /var/tmp/spdk-raid.sock 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53749 ']' 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.395 02:33:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.395 [2024-07-25 02:33:54.176267] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:09:07.395 [2024-07-25 02:33:54.176609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:07.961 EAL: TSC is not safe to use in SMP mode 00:09:07.961 EAL: TSC is not invariant 00:09:07.961 [2024-07-25 02:33:54.594489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.961 [2024-07-25 02:33:54.686639] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
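For readers following the write-error flow recorded below, this is a minimal by-hand sketch of what the harness does around this point; the binary paths, socket path, and flags are copied from the trace, while running bdevperf in the background with & (instead of the script's waitforlisten helper) is an assumption for manual use:

# Start bdevperf idle (-z) so it waits for RPC configuration on the raid socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
# After raid_bdev1 is assembled (see the RPCs that follow), inject a write error into the first base bdev and run the workload
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests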
00:09:07.961 [2024-07-25 02:33:54.688327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.961 [2024-07-25 02:33:54.688950] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.961 [2024-07-25 02:33:54.688962] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.220 02:33:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.220 02:33:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:08.220 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:08.220 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:08.477 BaseBdev1_malloc 00:09:08.477 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:08.477 true 00:09:08.736 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:08.736 [2024-07-25 02:33:55.556010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:08.736 [2024-07-25 02:33:55.556051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.736 [2024-07-25 02:33:55.556090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30aed9634780 00:09:08.736 [2024-07-25 02:33:55.556095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.736 [2024-07-25 02:33:55.556511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.736 [2024-07-25 02:33:55.556536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:08.736 BaseBdev1 00:09:08.736 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:08.736 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:08.994 BaseBdev2_malloc 00:09:08.994 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:09.253 true 00:09:09.253 02:33:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:09.253 [2024-07-25 02:33:56.096172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:09.253 [2024-07-25 02:33:56.096207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.253 [2024-07-25 02:33:56.096226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30aed9634c80 00:09:09.253 [2024-07-25 02:33:56.096232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.253 [2024-07-25 02:33:56.096665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.253 [2024-07-25 02:33:56.096690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:09:09.253 BaseBdev2 00:09:09.253 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:09.253 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:09.512 BaseBdev3_malloc 00:09:09.512 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:09.772 true 00:09:09.772 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:09.772 [2024-07-25 02:33:56.612325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:09.772 [2024-07-25 02:33:56.612359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.772 [2024-07-25 02:33:56.612380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x30aed9635180 00:09:09.772 [2024-07-25 02:33:56.612386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.772 [2024-07-25 02:33:56.612805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.772 [2024-07-25 02:33:56.612832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:09.772 BaseBdev3 00:09:09.772 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:10.032 [2024-07-25 02:33:56.792389] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.032 [2024-07-25 02:33:56.792753] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.032 [2024-07-25 02:33:56.792776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.032 [2024-07-25 02:33:56.792822] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x30aed9635400 00:09:10.032 [2024-07-25 02:33:56.792827] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.032 [2024-07-25 02:33:56.792855] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30aed96a0e20 00:09:10.032 [2024-07-25 02:33:56.792903] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x30aed9635400 00:09:10.032 [2024-07-25 02:33:56.792906] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x30aed9635400 00:09:10.032 [2024-07-25 02:33:56.792923] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
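The three base bdevs claimed above are each built as a malloc bdev wrapped by an error bdev and a passthru bdev; a minimal sketch of that RPC chain, using the same socket and sizes as the trace (shown once for BaseBdev1, repeated for BaseBdev2 and BaseBdev3):

# malloc backing device: 32 MiB, 512-byte blocks
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
# error-injection wrapper (creates EE_BaseBdev1_malloc)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
# passthru wrapper exposed to the raid module as BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# assemble the raid0 volume with a 64 KiB strip size and an on-disk superblock (-s)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s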
00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.032 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.291 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:10.291 "name": "raid_bdev1", 00:09:10.291 "uuid": "5773aa8c-4a2e-11ef-9c8e-7947904e2597", 00:09:10.291 "strip_size_kb": 64, 00:09:10.291 "state": "online", 00:09:10.291 "raid_level": "raid0", 00:09:10.291 "superblock": true, 00:09:10.291 "num_base_bdevs": 3, 00:09:10.291 "num_base_bdevs_discovered": 3, 00:09:10.291 "num_base_bdevs_operational": 3, 00:09:10.291 "base_bdevs_list": [ 00:09:10.291 { 00:09:10.291 "name": "BaseBdev1", 00:09:10.291 "uuid": "cf3ba014-8fd0-cc52-b0ba-583034851551", 00:09:10.291 "is_configured": true, 00:09:10.291 "data_offset": 2048, 00:09:10.291 "data_size": 63488 00:09:10.291 }, 00:09:10.291 { 00:09:10.291 "name": "BaseBdev2", 00:09:10.291 "uuid": "0b612eef-96f4-275e-8b1d-1cc609f42f49", 00:09:10.291 "is_configured": true, 00:09:10.291 "data_offset": 2048, 00:09:10.291 "data_size": 63488 00:09:10.291 }, 00:09:10.291 { 00:09:10.291 "name": "BaseBdev3", 00:09:10.291 "uuid": "b8892dd2-1b93-5a5f-8a1f-a07b022114be", 00:09:10.291 "is_configured": true, 00:09:10.291 "data_offset": 2048, 00:09:10.291 "data_size": 63488 00:09:10.291 } 00:09:10.291 ] 00:09:10.291 }' 00:09:10.291 02:33:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:10.291 02:33:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.551 02:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:10.551 02:33:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:10.551 [2024-07-25 02:33:57.320605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x30aed96a0ec0 00:09:11.489 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:11.749 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:11.750 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.750 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.750 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:11.750 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:11.750 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.750 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.009 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:12.009 "name": "raid_bdev1", 00:09:12.009 "uuid": "5773aa8c-4a2e-11ef-9c8e-7947904e2597", 00:09:12.009 "strip_size_kb": 64, 00:09:12.009 "state": "online", 00:09:12.009 "raid_level": "raid0", 00:09:12.009 "superblock": true, 00:09:12.009 "num_base_bdevs": 3, 00:09:12.009 "num_base_bdevs_discovered": 3, 00:09:12.009 "num_base_bdevs_operational": 3, 00:09:12.009 "base_bdevs_list": [ 00:09:12.009 { 00:09:12.009 "name": "BaseBdev1", 00:09:12.009 "uuid": "cf3ba014-8fd0-cc52-b0ba-583034851551", 00:09:12.009 "is_configured": true, 00:09:12.009 "data_offset": 2048, 00:09:12.009 "data_size": 63488 00:09:12.009 }, 00:09:12.009 { 00:09:12.009 "name": "BaseBdev2", 00:09:12.009 "uuid": "0b612eef-96f4-275e-8b1d-1cc609f42f49", 00:09:12.009 "is_configured": true, 00:09:12.009 "data_offset": 2048, 00:09:12.009 "data_size": 63488 00:09:12.009 }, 00:09:12.009 { 00:09:12.009 "name": "BaseBdev3", 00:09:12.009 "uuid": "b8892dd2-1b93-5a5f-8a1f-a07b022114be", 00:09:12.009 "is_configured": true, 00:09:12.009 "data_offset": 2048, 00:09:12.009 "data_size": 63488 00:09:12.009 } 00:09:12.009 ] 00:09:12.009 }' 00:09:12.009 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:12.009 02:33:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.269 02:33:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:12.269 [2024-07-25 02:33:59.129157] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.269 [2024-07-25 02:33:59.129182] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.269 [2024-07-25 02:33:59.129453] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.269 [2024-07-25 02:33:59.129465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.269 [2024-07-25 02:33:59.129471] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.269 [2024-07-25 02:33:59.129475] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x30aed9635400 name raid_bdev1, state offline 00:09:12.269 0 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53749 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53749 ']' 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53749 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:12.269 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53749 00:09:12.270 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:12.270 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:12.270 killing process with pid 53749 00:09:12.270 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53749' 00:09:12.270 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53749 00:09:12.270 [2024-07-25 02:33:59.161527] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.270 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53749 00:09:12.270 [2024-07-25 02:33:59.175304] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Y028xkDFDY 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.55 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.55 != \0\.\0\0 ]] 00:09:12.530 00:09:12.530 real 0m5.203s 00:09:12.530 user 0m7.760s 00:09:12.530 sys 0m0.924s 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.530 02:33:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.530 ************************************ 00:09:12.530 END TEST raid_write_error_test 00:09:12.530 ************************************ 00:09:12.530 02:33:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:12.530 02:33:59 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:12.530 02:33:59 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:12.530 02:33:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:12.530 02:33:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.530 02:33:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.530 ************************************ 00:09:12.530 START TEST raid_state_function_test 00:09:12.530 ************************************ 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=53874 00:09:12.530 Process raid pid: 53874 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 53874' 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 53874 /var/tmp/spdk-raid.sock 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 53874 ']' 
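Unlike the error tests above, raid_state_function_test drives a bare bdev_svc app and checks the raid bdev's state transitions rather than I/O. A minimal sketch of the first check, with the socket path and RPCs taken from the trace that follows (Existed_Raid is declared while none of its base bdevs exist, so it reports state "configuring"):

# declare the concat volume before any base bdev exists
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# the raid bdev shows up as "state": "configuring" with "num_base_bdevs_discovered": 0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
# adding a real base bdev raises num_base_bdevs_discovered to 1 while the state stays "configuring"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1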
00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.530 02:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.790 [2024-07-25 02:33:59.440987] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:09:12.790 [2024-07-25 02:33:59.441328] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:13.049 EAL: TSC is not safe to use in SMP mode 00:09:13.050 EAL: TSC is not invariant 00:09:13.050 [2024-07-25 02:33:59.859973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.050 [2024-07-25 02:33:59.952409] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:13.050 [2024-07-25 02:33:59.954095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.050 [2024-07-25 02:33:59.954722] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.050 [2024-07-25 02:33:59.954734] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.618 02:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:13.619 [2024-07-25 02:34:00.501802] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.619 [2024-07-25 02:34:00.501837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.619 [2024-07-25 02:34:00.501841] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.619 [2024-07-25 02:34:00.501846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.619 [2024-07-25 02:34:00.501849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.619 [2024-07-25 02:34:00.501854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.619 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.878 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:13.878 "name": "Existed_Raid", 00:09:13.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.878 "strip_size_kb": 64, 00:09:13.878 "state": "configuring", 00:09:13.878 "raid_level": "concat", 00:09:13.878 "superblock": false, 00:09:13.878 "num_base_bdevs": 3, 00:09:13.878 "num_base_bdevs_discovered": 0, 00:09:13.878 "num_base_bdevs_operational": 3, 00:09:13.878 "base_bdevs_list": [ 00:09:13.878 { 00:09:13.878 "name": "BaseBdev1", 00:09:13.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.878 "is_configured": false, 00:09:13.878 "data_offset": 0, 00:09:13.878 "data_size": 0 00:09:13.878 }, 00:09:13.878 { 00:09:13.878 "name": "BaseBdev2", 00:09:13.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.878 "is_configured": false, 00:09:13.878 "data_offset": 0, 00:09:13.878 "data_size": 0 00:09:13.878 }, 00:09:13.878 { 00:09:13.878 "name": "BaseBdev3", 00:09:13.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.878 "is_configured": false, 00:09:13.878 "data_offset": 0, 00:09:13.878 "data_size": 0 00:09:13.878 } 00:09:13.878 ] 00:09:13.878 }' 00:09:13.878 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.878 02:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.138 02:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:14.398 [2024-07-25 02:34:01.121955] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.398 [2024-07-25 02:34:01.121971] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3eb5f0634500 name Existed_Raid, state configuring 00:09:14.398 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:14.398 [2024-07-25 02:34:01.306012] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.398 [2024-07-25 02:34:01.306042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.398 [2024-07-25 02:34:01.306045] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.398 [2024-07-25 02:34:01.306050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.398 [2024-07-25 
02:34:01.306053] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.398 [2024-07-25 02:34:01.306058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.658 [2024-07-25 02:34:01.490868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.658 BaseBdev1 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:14.658 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:14.917 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.176 [ 00:09:15.176 { 00:09:15.176 "name": "BaseBdev1", 00:09:15.176 "aliases": [ 00:09:15.176 "5a4079e0-4a2e-11ef-9c8e-7947904e2597" 00:09:15.176 ], 00:09:15.176 "product_name": "Malloc disk", 00:09:15.176 "block_size": 512, 00:09:15.176 "num_blocks": 65536, 00:09:15.176 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:15.176 "assigned_rate_limits": { 00:09:15.176 "rw_ios_per_sec": 0, 00:09:15.176 "rw_mbytes_per_sec": 0, 00:09:15.176 "r_mbytes_per_sec": 0, 00:09:15.176 "w_mbytes_per_sec": 0 00:09:15.176 }, 00:09:15.176 "claimed": true, 00:09:15.176 "claim_type": "exclusive_write", 00:09:15.176 "zoned": false, 00:09:15.176 "supported_io_types": { 00:09:15.176 "read": true, 00:09:15.176 "write": true, 00:09:15.176 "unmap": true, 00:09:15.176 "flush": true, 00:09:15.176 "reset": true, 00:09:15.176 "nvme_admin": false, 00:09:15.176 "nvme_io": false, 00:09:15.176 "nvme_io_md": false, 00:09:15.176 "write_zeroes": true, 00:09:15.176 "zcopy": true, 00:09:15.176 "get_zone_info": false, 00:09:15.176 "zone_management": false, 00:09:15.176 "zone_append": false, 00:09:15.176 "compare": false, 00:09:15.176 "compare_and_write": false, 00:09:15.176 "abort": true, 00:09:15.176 "seek_hole": false, 00:09:15.176 "seek_data": false, 00:09:15.176 "copy": true, 00:09:15.176 "nvme_iov_md": false 00:09:15.176 }, 00:09:15.176 "memory_domains": [ 00:09:15.176 { 00:09:15.176 "dma_device_id": "system", 00:09:15.176 "dma_device_type": 1 00:09:15.176 }, 00:09:15.176 { 00:09:15.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.177 "dma_device_type": 2 00:09:15.177 } 00:09:15.177 ], 00:09:15.177 "driver_specific": {} 00:09:15.177 } 00:09:15.177 ] 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.177 02:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.177 "name": "Existed_Raid", 00:09:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.177 "strip_size_kb": 64, 00:09:15.177 "state": "configuring", 00:09:15.177 "raid_level": "concat", 00:09:15.177 "superblock": false, 00:09:15.177 "num_base_bdevs": 3, 00:09:15.177 "num_base_bdevs_discovered": 1, 00:09:15.177 "num_base_bdevs_operational": 3, 00:09:15.177 "base_bdevs_list": [ 00:09:15.177 { 00:09:15.177 "name": "BaseBdev1", 00:09:15.177 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:15.177 "is_configured": true, 00:09:15.177 "data_offset": 0, 00:09:15.177 "data_size": 65536 00:09:15.177 }, 00:09:15.177 { 00:09:15.177 "name": "BaseBdev2", 00:09:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.177 "is_configured": false, 00:09:15.177 "data_offset": 0, 00:09:15.177 "data_size": 0 00:09:15.177 }, 00:09:15.177 { 00:09:15.177 "name": "BaseBdev3", 00:09:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.177 "is_configured": false, 00:09:15.177 "data_offset": 0, 00:09:15.177 "data_size": 0 00:09:15.177 } 00:09:15.177 ] 00:09:15.177 }' 00:09:15.177 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.177 02:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.436 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:15.696 [2024-07-25 02:34:02.438328] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.696 [2024-07-25 02:34:02.438346] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3eb5f0634500 name Existed_Raid, state configuring 00:09:15.696 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:15.696 [2024-07-25 02:34:02.594379] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:09:15.696 [2024-07-25 02:34:02.595098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.696 [2024-07-25 02:34:02.595131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.696 [2024-07-25 02:34:02.595135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.696 [2024-07-25 02:34:02.595141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.955 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.955 "name": "Existed_Raid", 00:09:15.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.956 "strip_size_kb": 64, 00:09:15.956 "state": "configuring", 00:09:15.956 "raid_level": "concat", 00:09:15.956 "superblock": false, 00:09:15.956 "num_base_bdevs": 3, 00:09:15.956 "num_base_bdevs_discovered": 1, 00:09:15.956 "num_base_bdevs_operational": 3, 00:09:15.956 "base_bdevs_list": [ 00:09:15.956 { 00:09:15.956 "name": "BaseBdev1", 00:09:15.956 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:15.956 "is_configured": true, 00:09:15.956 "data_offset": 0, 00:09:15.956 "data_size": 65536 00:09:15.956 }, 00:09:15.956 { 00:09:15.956 "name": "BaseBdev2", 00:09:15.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.956 "is_configured": false, 00:09:15.956 "data_offset": 0, 00:09:15.956 "data_size": 0 00:09:15.956 }, 00:09:15.956 { 00:09:15.956 "name": "BaseBdev3", 00:09:15.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.956 "is_configured": false, 00:09:15.956 "data_offset": 0, 00:09:15.956 "data_size": 0 00:09:15.956 } 00:09:15.956 ] 00:09:15.956 }' 00:09:15.956 02:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.956 02:34:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.215 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.475 [2024-07-25 02:34:03.202650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.475 BaseBdev2 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:16.475 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:16.733 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.734 [ 00:09:16.734 { 00:09:16.734 "name": "BaseBdev2", 00:09:16.734 "aliases": [ 00:09:16.734 "5b45c801-4a2e-11ef-9c8e-7947904e2597" 00:09:16.734 ], 00:09:16.734 "product_name": "Malloc disk", 00:09:16.734 "block_size": 512, 00:09:16.734 "num_blocks": 65536, 00:09:16.734 "uuid": "5b45c801-4a2e-11ef-9c8e-7947904e2597", 00:09:16.734 "assigned_rate_limits": { 00:09:16.734 "rw_ios_per_sec": 0, 00:09:16.734 "rw_mbytes_per_sec": 0, 00:09:16.734 "r_mbytes_per_sec": 0, 00:09:16.734 "w_mbytes_per_sec": 0 00:09:16.734 }, 00:09:16.734 "claimed": true, 00:09:16.734 "claim_type": "exclusive_write", 00:09:16.734 "zoned": false, 00:09:16.734 "supported_io_types": { 00:09:16.734 "read": true, 00:09:16.734 "write": true, 00:09:16.734 "unmap": true, 00:09:16.734 "flush": true, 00:09:16.734 "reset": true, 00:09:16.734 "nvme_admin": false, 00:09:16.734 "nvme_io": false, 00:09:16.734 "nvme_io_md": false, 00:09:16.734 "write_zeroes": true, 00:09:16.734 "zcopy": true, 00:09:16.734 "get_zone_info": false, 00:09:16.734 "zone_management": false, 00:09:16.734 "zone_append": false, 00:09:16.734 "compare": false, 00:09:16.734 "compare_and_write": false, 00:09:16.734 "abort": true, 00:09:16.734 "seek_hole": false, 00:09:16.734 "seek_data": false, 00:09:16.734 "copy": true, 00:09:16.734 "nvme_iov_md": false 00:09:16.734 }, 00:09:16.734 "memory_domains": [ 00:09:16.734 { 00:09:16.734 "dma_device_id": "system", 00:09:16.734 "dma_device_type": 1 00:09:16.734 }, 00:09:16.734 { 00:09:16.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.734 "dma_device_type": 2 00:09:16.734 } 00:09:16.734 ], 00:09:16.734 "driver_specific": {} 00:09:16.734 } 00:09:16.734 ] 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.734 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.003 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:17.003 "name": "Existed_Raid", 00:09:17.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.003 "strip_size_kb": 64, 00:09:17.003 "state": "configuring", 00:09:17.003 "raid_level": "concat", 00:09:17.003 "superblock": false, 00:09:17.003 "num_base_bdevs": 3, 00:09:17.003 "num_base_bdevs_discovered": 2, 00:09:17.003 "num_base_bdevs_operational": 3, 00:09:17.003 "base_bdevs_list": [ 00:09:17.003 { 00:09:17.003 "name": "BaseBdev1", 00:09:17.003 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:17.003 "is_configured": true, 00:09:17.003 "data_offset": 0, 00:09:17.003 "data_size": 65536 00:09:17.003 }, 00:09:17.003 { 00:09:17.003 "name": "BaseBdev2", 00:09:17.003 "uuid": "5b45c801-4a2e-11ef-9c8e-7947904e2597", 00:09:17.003 "is_configured": true, 00:09:17.003 "data_offset": 0, 00:09:17.003 "data_size": 65536 00:09:17.003 }, 00:09:17.003 { 00:09:17.003 "name": "BaseBdev3", 00:09:17.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.003 "is_configured": false, 00:09:17.003 "data_offset": 0, 00:09:17.003 "data_size": 0 00:09:17.003 } 00:09:17.003 ] 00:09:17.003 }' 00:09:17.003 02:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:17.003 02:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.277 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.535 [2024-07-25 02:34:04.198900] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.535 [2024-07-25 02:34:04.198919] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3eb5f0634a00 00:09:17.535 [2024-07-25 02:34:04.198922] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:17.535 [2024-07-25 02:34:04.198953] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3eb5f0697e20 00:09:17.535 [2024-07-25 02:34:04.199025] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3eb5f0634a00 00:09:17.535 [2024-07-25 02:34:04.199028] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3eb5f0634a00 00:09:17.535 [2024-07-25 02:34:04.199052] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.535 BaseBdev3 00:09:17.535 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:17.535 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:17.536 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:17.536 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:17.536 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:17.536 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:17.536 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.536 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.795 [ 00:09:17.795 { 00:09:17.795 "name": "BaseBdev3", 00:09:17.795 "aliases": [ 00:09:17.795 "5bddcc8c-4a2e-11ef-9c8e-7947904e2597" 00:09:17.795 ], 00:09:17.795 "product_name": "Malloc disk", 00:09:17.795 "block_size": 512, 00:09:17.795 "num_blocks": 65536, 00:09:17.795 "uuid": "5bddcc8c-4a2e-11ef-9c8e-7947904e2597", 00:09:17.795 "assigned_rate_limits": { 00:09:17.795 "rw_ios_per_sec": 0, 00:09:17.795 "rw_mbytes_per_sec": 0, 00:09:17.795 "r_mbytes_per_sec": 0, 00:09:17.795 "w_mbytes_per_sec": 0 00:09:17.795 }, 00:09:17.795 "claimed": true, 00:09:17.795 "claim_type": "exclusive_write", 00:09:17.795 "zoned": false, 00:09:17.795 "supported_io_types": { 00:09:17.795 "read": true, 00:09:17.795 "write": true, 00:09:17.795 "unmap": true, 00:09:17.795 "flush": true, 00:09:17.795 "reset": true, 00:09:17.795 "nvme_admin": false, 00:09:17.795 "nvme_io": false, 00:09:17.795 "nvme_io_md": false, 00:09:17.795 "write_zeroes": true, 00:09:17.795 "zcopy": true, 00:09:17.795 "get_zone_info": false, 00:09:17.795 "zone_management": false, 00:09:17.795 "zone_append": false, 00:09:17.795 "compare": false, 00:09:17.795 "compare_and_write": false, 00:09:17.795 "abort": true, 00:09:17.795 "seek_hole": false, 00:09:17.795 "seek_data": false, 00:09:17.795 "copy": true, 00:09:17.795 "nvme_iov_md": false 00:09:17.795 }, 00:09:17.795 "memory_domains": [ 00:09:17.795 { 00:09:17.795 "dma_device_id": "system", 00:09:17.795 "dma_device_type": 1 00:09:17.795 }, 00:09:17.795 { 00:09:17.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.795 "dma_device_type": 2 00:09:17.795 } 00:09:17.795 ], 00:09:17.795 "driver_specific": {} 00:09:17.795 } 00:09:17.795 ] 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.795 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.055 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:18.055 "name": "Existed_Raid", 00:09:18.055 "uuid": "5bddd086-4a2e-11ef-9c8e-7947904e2597", 00:09:18.055 "strip_size_kb": 64, 00:09:18.055 "state": "online", 00:09:18.055 "raid_level": "concat", 00:09:18.055 "superblock": false, 00:09:18.055 "num_base_bdevs": 3, 00:09:18.055 "num_base_bdevs_discovered": 3, 00:09:18.055 "num_base_bdevs_operational": 3, 00:09:18.055 "base_bdevs_list": [ 00:09:18.055 { 00:09:18.055 "name": "BaseBdev1", 00:09:18.055 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:18.055 "is_configured": true, 00:09:18.055 "data_offset": 0, 00:09:18.055 "data_size": 65536 00:09:18.055 }, 00:09:18.055 { 00:09:18.055 "name": "BaseBdev2", 00:09:18.055 "uuid": "5b45c801-4a2e-11ef-9c8e-7947904e2597", 00:09:18.055 "is_configured": true, 00:09:18.055 "data_offset": 0, 00:09:18.055 "data_size": 65536 00:09:18.055 }, 00:09:18.055 { 00:09:18.055 "name": "BaseBdev3", 00:09:18.055 "uuid": "5bddcc8c-4a2e-11ef-9c8e-7947904e2597", 00:09:18.055 "is_configured": true, 00:09:18.055 "data_offset": 0, 00:09:18.055 "data_size": 65536 00:09:18.055 } 00:09:18.055 ] 00:09:18.055 }' 00:09:18.055 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:18.055 02:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:18.315 02:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:18.315 [2024-07-25 02:34:05.155074] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.315 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:18.315 "name": "Existed_Raid", 00:09:18.315 "aliases": [ 00:09:18.315 "5bddd086-4a2e-11ef-9c8e-7947904e2597" 00:09:18.315 ], 00:09:18.315 "product_name": "Raid Volume", 00:09:18.315 "block_size": 512, 00:09:18.315 "num_blocks": 196608, 00:09:18.315 "uuid": "5bddd086-4a2e-11ef-9c8e-7947904e2597", 00:09:18.315 "assigned_rate_limits": { 00:09:18.315 "rw_ios_per_sec": 0, 00:09:18.315 "rw_mbytes_per_sec": 0, 00:09:18.315 "r_mbytes_per_sec": 0, 00:09:18.315 "w_mbytes_per_sec": 0 00:09:18.315 }, 00:09:18.315 "claimed": false, 00:09:18.315 "zoned": false, 00:09:18.315 "supported_io_types": { 00:09:18.315 "read": true, 00:09:18.315 "write": true, 00:09:18.315 "unmap": true, 00:09:18.315 "flush": true, 00:09:18.315 "reset": true, 00:09:18.315 "nvme_admin": false, 00:09:18.315 "nvme_io": false, 00:09:18.315 "nvme_io_md": false, 00:09:18.315 "write_zeroes": true, 00:09:18.315 "zcopy": false, 00:09:18.315 "get_zone_info": false, 00:09:18.315 "zone_management": false, 00:09:18.315 "zone_append": false, 00:09:18.315 "compare": false, 00:09:18.315 "compare_and_write": false, 00:09:18.315 "abort": false, 00:09:18.315 "seek_hole": false, 00:09:18.315 "seek_data": false, 00:09:18.315 "copy": false, 00:09:18.315 "nvme_iov_md": false 00:09:18.315 }, 00:09:18.315 "memory_domains": [ 00:09:18.315 { 00:09:18.315 "dma_device_id": "system", 00:09:18.315 "dma_device_type": 1 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.315 "dma_device_type": 2 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "dma_device_id": "system", 00:09:18.315 "dma_device_type": 1 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.315 "dma_device_type": 2 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "dma_device_id": "system", 00:09:18.315 "dma_device_type": 1 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.315 "dma_device_type": 2 00:09:18.315 } 00:09:18.315 ], 00:09:18.315 "driver_specific": { 00:09:18.315 "raid": { 00:09:18.315 "uuid": "5bddd086-4a2e-11ef-9c8e-7947904e2597", 00:09:18.315 "strip_size_kb": 64, 00:09:18.315 "state": "online", 00:09:18.315 "raid_level": "concat", 00:09:18.315 "superblock": false, 00:09:18.315 "num_base_bdevs": 3, 00:09:18.315 "num_base_bdevs_discovered": 3, 00:09:18.315 "num_base_bdevs_operational": 3, 00:09:18.315 "base_bdevs_list": [ 00:09:18.315 { 00:09:18.315 "name": "BaseBdev1", 00:09:18.315 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:18.315 "is_configured": true, 00:09:18.315 "data_offset": 0, 00:09:18.315 "data_size": 65536 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "name": "BaseBdev2", 00:09:18.315 "uuid": "5b45c801-4a2e-11ef-9c8e-7947904e2597", 00:09:18.315 "is_configured": true, 00:09:18.315 "data_offset": 0, 00:09:18.315 "data_size": 65536 00:09:18.315 }, 00:09:18.315 { 00:09:18.315 "name": "BaseBdev3", 00:09:18.315 "uuid": "5bddcc8c-4a2e-11ef-9c8e-7947904e2597", 00:09:18.315 "is_configured": true, 00:09:18.315 "data_offset": 0, 00:09:18.315 "data_size": 65536 00:09:18.315 } 00:09:18.315 ] 00:09:18.315 } 00:09:18.315 } 00:09:18.315 }' 00:09:18.315 02:34:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.315 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:18.315 BaseBdev2 00:09:18.315 BaseBdev3' 00:09:18.315 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:18.315 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:18.315 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:18.575 "name": "BaseBdev1", 00:09:18.575 "aliases": [ 00:09:18.575 "5a4079e0-4a2e-11ef-9c8e-7947904e2597" 00:09:18.575 ], 00:09:18.575 "product_name": "Malloc disk", 00:09:18.575 "block_size": 512, 00:09:18.575 "num_blocks": 65536, 00:09:18.575 "uuid": "5a4079e0-4a2e-11ef-9c8e-7947904e2597", 00:09:18.575 "assigned_rate_limits": { 00:09:18.575 "rw_ios_per_sec": 0, 00:09:18.575 "rw_mbytes_per_sec": 0, 00:09:18.575 "r_mbytes_per_sec": 0, 00:09:18.575 "w_mbytes_per_sec": 0 00:09:18.575 }, 00:09:18.575 "claimed": true, 00:09:18.575 "claim_type": "exclusive_write", 00:09:18.575 "zoned": false, 00:09:18.575 "supported_io_types": { 00:09:18.575 "read": true, 00:09:18.575 "write": true, 00:09:18.575 "unmap": true, 00:09:18.575 "flush": true, 00:09:18.575 "reset": true, 00:09:18.575 "nvme_admin": false, 00:09:18.575 "nvme_io": false, 00:09:18.575 "nvme_io_md": false, 00:09:18.575 "write_zeroes": true, 00:09:18.575 "zcopy": true, 00:09:18.575 "get_zone_info": false, 00:09:18.575 "zone_management": false, 00:09:18.575 "zone_append": false, 00:09:18.575 "compare": false, 00:09:18.575 "compare_and_write": false, 00:09:18.575 "abort": true, 00:09:18.575 "seek_hole": false, 00:09:18.575 "seek_data": false, 00:09:18.575 "copy": true, 00:09:18.575 "nvme_iov_md": false 00:09:18.575 }, 00:09:18.575 "memory_domains": [ 00:09:18.575 { 00:09:18.575 "dma_device_id": "system", 00:09:18.575 "dma_device_type": 1 00:09:18.575 }, 00:09:18.575 { 00:09:18.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.575 "dma_device_type": 2 00:09:18.575 } 00:09:18.575 ], 00:09:18.575 "driver_specific": {} 00:09:18.575 }' 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:18.575 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:18.576 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:18.835 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:18.835 "name": "BaseBdev2", 00:09:18.835 "aliases": [ 00:09:18.835 "5b45c801-4a2e-11ef-9c8e-7947904e2597" 00:09:18.835 ], 00:09:18.835 "product_name": "Malloc disk", 00:09:18.835 "block_size": 512, 00:09:18.835 "num_blocks": 65536, 00:09:18.835 "uuid": "5b45c801-4a2e-11ef-9c8e-7947904e2597", 00:09:18.835 "assigned_rate_limits": { 00:09:18.835 "rw_ios_per_sec": 0, 00:09:18.835 "rw_mbytes_per_sec": 0, 00:09:18.835 "r_mbytes_per_sec": 0, 00:09:18.835 "w_mbytes_per_sec": 0 00:09:18.835 }, 00:09:18.835 "claimed": true, 00:09:18.835 "claim_type": "exclusive_write", 00:09:18.835 "zoned": false, 00:09:18.835 "supported_io_types": { 00:09:18.835 "read": true, 00:09:18.835 "write": true, 00:09:18.835 "unmap": true, 00:09:18.835 "flush": true, 00:09:18.835 "reset": true, 00:09:18.835 "nvme_admin": false, 00:09:18.835 "nvme_io": false, 00:09:18.835 "nvme_io_md": false, 00:09:18.835 "write_zeroes": true, 00:09:18.835 "zcopy": true, 00:09:18.835 "get_zone_info": false, 00:09:18.835 "zone_management": false, 00:09:18.835 "zone_append": false, 00:09:18.835 "compare": false, 00:09:18.835 "compare_and_write": false, 00:09:18.835 "abort": true, 00:09:18.835 "seek_hole": false, 00:09:18.835 "seek_data": false, 00:09:18.835 "copy": true, 00:09:18.835 "nvme_iov_md": false 00:09:18.835 }, 00:09:18.835 "memory_domains": [ 00:09:18.835 { 00:09:18.835 "dma_device_id": "system", 00:09:18.835 "dma_device_type": 1 00:09:18.835 }, 00:09:18.835 { 00:09:18.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.835 "dma_device_type": 2 00:09:18.835 } 00:09:18.835 ], 00:09:18.836 "driver_specific": {} 00:09:18.836 }' 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:18.836 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:19.095 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:19.095 "name": "BaseBdev3", 00:09:19.095 "aliases": [ 00:09:19.095 "5bddcc8c-4a2e-11ef-9c8e-7947904e2597" 00:09:19.095 ], 00:09:19.095 "product_name": "Malloc disk", 00:09:19.095 "block_size": 512, 00:09:19.095 "num_blocks": 65536, 00:09:19.095 "uuid": "5bddcc8c-4a2e-11ef-9c8e-7947904e2597", 00:09:19.095 "assigned_rate_limits": { 00:09:19.095 "rw_ios_per_sec": 0, 00:09:19.095 "rw_mbytes_per_sec": 0, 00:09:19.095 "r_mbytes_per_sec": 0, 00:09:19.095 "w_mbytes_per_sec": 0 00:09:19.095 }, 00:09:19.095 "claimed": true, 00:09:19.095 "claim_type": "exclusive_write", 00:09:19.095 "zoned": false, 00:09:19.095 "supported_io_types": { 00:09:19.095 "read": true, 00:09:19.095 "write": true, 00:09:19.095 "unmap": true, 00:09:19.095 "flush": true, 00:09:19.095 "reset": true, 00:09:19.095 "nvme_admin": false, 00:09:19.095 "nvme_io": false, 00:09:19.095 "nvme_io_md": false, 00:09:19.095 "write_zeroes": true, 00:09:19.095 "zcopy": true, 00:09:19.095 "get_zone_info": false, 00:09:19.095 "zone_management": false, 00:09:19.095 "zone_append": false, 00:09:19.095 "compare": false, 00:09:19.095 "compare_and_write": false, 00:09:19.095 "abort": true, 00:09:19.095 "seek_hole": false, 00:09:19.095 "seek_data": false, 00:09:19.095 "copy": true, 00:09:19.095 "nvme_iov_md": false 00:09:19.095 }, 00:09:19.095 "memory_domains": [ 00:09:19.095 { 00:09:19.095 "dma_device_id": "system", 00:09:19.095 "dma_device_type": 1 00:09:19.095 }, 00:09:19.095 { 00:09:19.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.095 "dma_device_type": 2 00:09:19.095 } 00:09:19.095 ], 00:09:19.095 "driver_specific": {} 00:09:19.095 }' 00:09:19.095 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.095 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.095 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:19.095 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:19.096 02:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:19.354 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:19.354 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:19.354 [2024-07-25 02:34:06.171351] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:09:19.354 [2024-07-25 02:34:06.171365] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.354 [2024-07-25 02:34:06.171374] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.354 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:19.354 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:09:19.354 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.355 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.614 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:19.614 "name": "Existed_Raid", 00:09:19.614 "uuid": "5bddd086-4a2e-11ef-9c8e-7947904e2597", 00:09:19.614 "strip_size_kb": 64, 00:09:19.614 "state": "offline", 00:09:19.614 "raid_level": "concat", 00:09:19.614 "superblock": false, 00:09:19.614 "num_base_bdevs": 3, 00:09:19.614 "num_base_bdevs_discovered": 2, 00:09:19.614 "num_base_bdevs_operational": 2, 00:09:19.614 "base_bdevs_list": [ 00:09:19.614 { 00:09:19.614 "name": null, 00:09:19.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.614 "is_configured": false, 00:09:19.614 "data_offset": 0, 00:09:19.614 "data_size": 65536 00:09:19.614 }, 00:09:19.614 { 00:09:19.614 "name": "BaseBdev2", 00:09:19.614 "uuid": "5b45c801-4a2e-11ef-9c8e-7947904e2597", 00:09:19.614 "is_configured": true, 00:09:19.614 "data_offset": 0, 00:09:19.614 "data_size": 65536 00:09:19.614 }, 00:09:19.614 { 00:09:19.614 "name": "BaseBdev3", 00:09:19.614 "uuid": "5bddcc8c-4a2e-11ef-9c8e-7947904e2597", 00:09:19.614 "is_configured": true, 00:09:19.614 "data_offset": 0, 00:09:19.614 "data_size": 65536 00:09:19.614 } 00:09:19.614 ] 00:09:19.614 }' 00:09:19.614 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:09:19.614 02:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.873 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:19.873 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:19.873 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.873 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:20.133 [2024-07-25 02:34:06.968212] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.133 02:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:20.392 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:20.392 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.392 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:20.650 [2024-07-25 02:34:07.328941] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.650 [2024-07-25 02:34:07.328958] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3eb5f0634a00 name Existed_Raid, state offline 00:09:20.650 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:20.650 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:20.650 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.650 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:20.650 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:20.650 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:20.651 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:20.651 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:20.651 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:20.651 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.909 
BaseBdev2 00:09:20.909 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:20.910 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:20.910 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:20.910 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:20.910 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:20.910 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:20.910 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:21.169 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.169 [ 00:09:21.169 { 00:09:21.169 "name": "BaseBdev2", 00:09:21.169 "aliases": [ 00:09:21.169 "5dec5dac-4a2e-11ef-9c8e-7947904e2597" 00:09:21.169 ], 00:09:21.169 "product_name": "Malloc disk", 00:09:21.169 "block_size": 512, 00:09:21.169 "num_blocks": 65536, 00:09:21.169 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:21.169 "assigned_rate_limits": { 00:09:21.169 "rw_ios_per_sec": 0, 00:09:21.169 "rw_mbytes_per_sec": 0, 00:09:21.169 "r_mbytes_per_sec": 0, 00:09:21.169 "w_mbytes_per_sec": 0 00:09:21.169 }, 00:09:21.169 "claimed": false, 00:09:21.169 "zoned": false, 00:09:21.169 "supported_io_types": { 00:09:21.169 "read": true, 00:09:21.169 "write": true, 00:09:21.169 "unmap": true, 00:09:21.169 "flush": true, 00:09:21.169 "reset": true, 00:09:21.169 "nvme_admin": false, 00:09:21.169 "nvme_io": false, 00:09:21.169 "nvme_io_md": false, 00:09:21.169 "write_zeroes": true, 00:09:21.169 "zcopy": true, 00:09:21.169 "get_zone_info": false, 00:09:21.169 "zone_management": false, 00:09:21.169 "zone_append": false, 00:09:21.169 "compare": false, 00:09:21.169 "compare_and_write": false, 00:09:21.169 "abort": true, 00:09:21.169 "seek_hole": false, 00:09:21.169 "seek_data": false, 00:09:21.169 "copy": true, 00:09:21.169 "nvme_iov_md": false 00:09:21.169 }, 00:09:21.169 "memory_domains": [ 00:09:21.169 { 00:09:21.169 "dma_device_id": "system", 00:09:21.169 "dma_device_type": 1 00:09:21.169 }, 00:09:21.169 { 00:09:21.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.169 "dma_device_type": 2 00:09:21.169 } 00:09:21.169 ], 00:09:21.169 "driver_specific": {} 00:09:21.169 } 00:09:21.169 ] 00:09:21.169 02:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:21.169 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:21.169 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:21.169 02:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.429 BaseBdev3 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:21.429 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:21.688 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.688 [ 00:09:21.688 { 00:09:21.688 "name": "BaseBdev3", 00:09:21.688 "aliases": [ 00:09:21.688 "5e38ad5e-4a2e-11ef-9c8e-7947904e2597" 00:09:21.688 ], 00:09:21.688 "product_name": "Malloc disk", 00:09:21.688 "block_size": 512, 00:09:21.688 "num_blocks": 65536, 00:09:21.688 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:21.688 "assigned_rate_limits": { 00:09:21.688 "rw_ios_per_sec": 0, 00:09:21.688 "rw_mbytes_per_sec": 0, 00:09:21.688 "r_mbytes_per_sec": 0, 00:09:21.688 "w_mbytes_per_sec": 0 00:09:21.688 }, 00:09:21.688 "claimed": false, 00:09:21.688 "zoned": false, 00:09:21.688 "supported_io_types": { 00:09:21.688 "read": true, 00:09:21.688 "write": true, 00:09:21.688 "unmap": true, 00:09:21.688 "flush": true, 00:09:21.688 "reset": true, 00:09:21.688 "nvme_admin": false, 00:09:21.688 "nvme_io": false, 00:09:21.688 "nvme_io_md": false, 00:09:21.688 "write_zeroes": true, 00:09:21.688 "zcopy": true, 00:09:21.688 "get_zone_info": false, 00:09:21.688 "zone_management": false, 00:09:21.688 "zone_append": false, 00:09:21.688 "compare": false, 00:09:21.688 "compare_and_write": false, 00:09:21.688 "abort": true, 00:09:21.688 "seek_hole": false, 00:09:21.688 "seek_data": false, 00:09:21.688 "copy": true, 00:09:21.688 "nvme_iov_md": false 00:09:21.688 }, 00:09:21.688 "memory_domains": [ 00:09:21.688 { 00:09:21.688 "dma_device_id": "system", 00:09:21.688 "dma_device_type": 1 00:09:21.688 }, 00:09:21.688 { 00:09:21.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.688 "dma_device_type": 2 00:09:21.688 } 00:09:21.688 ], 00:09:21.688 "driver_specific": {} 00:09:21.688 } 00:09:21.688 ] 00:09:21.688 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:21.688 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:21.688 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:21.688 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:21.948 [2024-07-25 02:34:08.693971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.948 [2024-07-25 02:34:08.694006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.948 [2024-07-25 02:34:08.694010] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.948 [2024-07-25 02:34:08.694484] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.948 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.207 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:22.207 "name": "Existed_Raid", 00:09:22.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.207 "strip_size_kb": 64, 00:09:22.207 "state": "configuring", 00:09:22.207 "raid_level": "concat", 00:09:22.207 "superblock": false, 00:09:22.207 "num_base_bdevs": 3, 00:09:22.207 "num_base_bdevs_discovered": 2, 00:09:22.207 "num_base_bdevs_operational": 3, 00:09:22.207 "base_bdevs_list": [ 00:09:22.207 { 00:09:22.207 "name": "BaseBdev1", 00:09:22.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.207 "is_configured": false, 00:09:22.207 "data_offset": 0, 00:09:22.207 "data_size": 0 00:09:22.207 }, 00:09:22.207 { 00:09:22.207 "name": "BaseBdev2", 00:09:22.207 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:22.207 "is_configured": true, 00:09:22.207 "data_offset": 0, 00:09:22.207 "data_size": 65536 00:09:22.207 }, 00:09:22.207 { 00:09:22.207 "name": "BaseBdev3", 00:09:22.207 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:22.207 "is_configured": true, 00:09:22.207 "data_offset": 0, 00:09:22.207 "data_size": 65536 00:09:22.207 } 00:09:22.207 ] 00:09:22.207 }' 00:09:22.207 02:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:22.207 02:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:22.466 [2024-07-25 02:34:09.322111] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:22.466 
02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.466 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.726 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:22.726 "name": "Existed_Raid", 00:09:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.726 "strip_size_kb": 64, 00:09:22.726 "state": "configuring", 00:09:22.726 "raid_level": "concat", 00:09:22.726 "superblock": false, 00:09:22.726 "num_base_bdevs": 3, 00:09:22.726 "num_base_bdevs_discovered": 1, 00:09:22.726 "num_base_bdevs_operational": 3, 00:09:22.726 "base_bdevs_list": [ 00:09:22.726 { 00:09:22.726 "name": "BaseBdev1", 00:09:22.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.726 "is_configured": false, 00:09:22.726 "data_offset": 0, 00:09:22.726 "data_size": 0 00:09:22.726 }, 00:09:22.726 { 00:09:22.726 "name": null, 00:09:22.726 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:22.726 "is_configured": false, 00:09:22.726 "data_offset": 0, 00:09:22.726 "data_size": 65536 00:09:22.726 }, 00:09:22.726 { 00:09:22.726 "name": "BaseBdev3", 00:09:22.726 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:22.726 "is_configured": true, 00:09:22.726 "data_offset": 0, 00:09:22.726 "data_size": 65536 00:09:22.726 } 00:09:22.726 ] 00:09:22.726 }' 00:09:22.726 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:22.726 02:34:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.986 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.986 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.245 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:23.245 02:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.245 [2024-07-25 02:34:10.138396] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.245 BaseBdev1 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:23.245 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:23.504 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.763 [ 00:09:23.763 { 00:09:23.763 "name": "BaseBdev1", 00:09:23.763 "aliases": [ 00:09:23.763 "5f6817a4-4a2e-11ef-9c8e-7947904e2597" 00:09:23.763 ], 00:09:23.763 "product_name": "Malloc disk", 00:09:23.763 "block_size": 512, 00:09:23.763 "num_blocks": 65536, 00:09:23.763 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:23.763 "assigned_rate_limits": { 00:09:23.763 "rw_ios_per_sec": 0, 00:09:23.763 "rw_mbytes_per_sec": 0, 00:09:23.763 "r_mbytes_per_sec": 0, 00:09:23.763 "w_mbytes_per_sec": 0 00:09:23.763 }, 00:09:23.763 "claimed": true, 00:09:23.763 "claim_type": "exclusive_write", 00:09:23.763 "zoned": false, 00:09:23.763 "supported_io_types": { 00:09:23.763 "read": true, 00:09:23.763 "write": true, 00:09:23.763 "unmap": true, 00:09:23.763 "flush": true, 00:09:23.763 "reset": true, 00:09:23.763 "nvme_admin": false, 00:09:23.763 "nvme_io": false, 00:09:23.763 "nvme_io_md": false, 00:09:23.763 "write_zeroes": true, 00:09:23.763 "zcopy": true, 00:09:23.763 "get_zone_info": false, 00:09:23.763 "zone_management": false, 00:09:23.763 "zone_append": false, 00:09:23.763 "compare": false, 00:09:23.763 "compare_and_write": false, 00:09:23.763 "abort": true, 00:09:23.763 "seek_hole": false, 00:09:23.763 "seek_data": false, 00:09:23.763 "copy": true, 00:09:23.763 "nvme_iov_md": false 00:09:23.763 }, 00:09:23.763 "memory_domains": [ 00:09:23.763 { 00:09:23.763 "dma_device_id": "system", 00:09:23.763 "dma_device_type": 1 00:09:23.763 }, 00:09:23.763 { 00:09:23.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.763 "dma_device_type": 2 00:09:23.763 } 00:09:23.763 ], 00:09:23.763 "driver_specific": {} 00:09:23.763 } 00:09:23.763 ] 00:09:23.763 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:23.763 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.763 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:23.763 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.764 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.022 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:24.022 "name": "Existed_Raid", 00:09:24.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.022 "strip_size_kb": 64, 00:09:24.022 "state": "configuring", 00:09:24.022 "raid_level": "concat", 00:09:24.022 "superblock": false, 00:09:24.022 "num_base_bdevs": 3, 00:09:24.022 "num_base_bdevs_discovered": 2, 00:09:24.022 "num_base_bdevs_operational": 3, 00:09:24.022 "base_bdevs_list": [ 00:09:24.022 { 00:09:24.022 "name": "BaseBdev1", 00:09:24.022 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:24.022 "is_configured": true, 00:09:24.022 "data_offset": 0, 00:09:24.022 "data_size": 65536 00:09:24.022 }, 00:09:24.022 { 00:09:24.022 "name": null, 00:09:24.022 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:24.022 "is_configured": false, 00:09:24.022 "data_offset": 0, 00:09:24.022 "data_size": 65536 00:09:24.022 }, 00:09:24.022 { 00:09:24.022 "name": "BaseBdev3", 00:09:24.022 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:24.022 "is_configured": true, 00:09:24.022 "data_offset": 0, 00:09:24.022 "data_size": 65536 00:09:24.022 } 00:09:24.022 ] 00:09:24.022 }' 00:09:24.022 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:24.022 02:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.279 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.279 02:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.279 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:24.279 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:24.538 [2024-07-25 02:34:11.266546] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.538 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.797 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:24.797 "name": "Existed_Raid", 00:09:24.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.797 "strip_size_kb": 64, 00:09:24.797 "state": "configuring", 00:09:24.797 "raid_level": "concat", 00:09:24.797 "superblock": false, 00:09:24.797 "num_base_bdevs": 3, 00:09:24.797 "num_base_bdevs_discovered": 1, 00:09:24.797 "num_base_bdevs_operational": 3, 00:09:24.797 "base_bdevs_list": [ 00:09:24.797 { 00:09:24.797 "name": "BaseBdev1", 00:09:24.797 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:24.797 "is_configured": true, 00:09:24.797 "data_offset": 0, 00:09:24.797 "data_size": 65536 00:09:24.797 }, 00:09:24.797 { 00:09:24.797 "name": null, 00:09:24.797 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:24.797 "is_configured": false, 00:09:24.797 "data_offset": 0, 00:09:24.797 "data_size": 65536 00:09:24.797 }, 00:09:24.797 { 00:09:24.797 "name": null, 00:09:24.797 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:24.797 "is_configured": false, 00:09:24.797 "data_offset": 0, 00:09:24.797 "data_size": 65536 00:09:24.797 } 00:09:24.797 ] 00:09:24.797 }' 00:09:24.797 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:24.797 02:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.055 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.055 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.055 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:25.055 02:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.314 [2024-07-25 02:34:12.082740] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:25.314 02:34:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.314 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.573 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:25.573 "name": "Existed_Raid", 00:09:25.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.573 "strip_size_kb": 64, 00:09:25.573 "state": "configuring", 00:09:25.573 "raid_level": "concat", 00:09:25.573 "superblock": false, 00:09:25.573 "num_base_bdevs": 3, 00:09:25.573 "num_base_bdevs_discovered": 2, 00:09:25.573 "num_base_bdevs_operational": 3, 00:09:25.573 "base_bdevs_list": [ 00:09:25.573 { 00:09:25.573 "name": "BaseBdev1", 00:09:25.573 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:25.573 "is_configured": true, 00:09:25.573 "data_offset": 0, 00:09:25.573 "data_size": 65536 00:09:25.573 }, 00:09:25.573 { 00:09:25.573 "name": null, 00:09:25.573 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:25.573 "is_configured": false, 00:09:25.573 "data_offset": 0, 00:09:25.573 "data_size": 65536 00:09:25.573 }, 00:09:25.573 { 00:09:25.573 "name": "BaseBdev3", 00:09:25.573 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:25.573 "is_configured": true, 00:09:25.573 "data_offset": 0, 00:09:25.573 "data_size": 65536 00:09:25.573 } 00:09:25.573 ] 00:09:25.573 }' 00:09:25.573 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:25.573 02:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.832 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.832 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.832 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:25.832 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:26.091 [2024-07-25 02:34:12.898920] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.091 02:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.350 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:26.350 "name": "Existed_Raid", 00:09:26.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.350 "strip_size_kb": 64, 00:09:26.350 "state": "configuring", 00:09:26.350 "raid_level": "concat", 00:09:26.350 "superblock": false, 00:09:26.350 "num_base_bdevs": 3, 00:09:26.350 "num_base_bdevs_discovered": 1, 00:09:26.350 "num_base_bdevs_operational": 3, 00:09:26.350 "base_bdevs_list": [ 00:09:26.350 { 00:09:26.350 "name": null, 00:09:26.350 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:26.350 "is_configured": false, 00:09:26.350 "data_offset": 0, 00:09:26.350 "data_size": 65536 00:09:26.350 }, 00:09:26.350 { 00:09:26.350 "name": null, 00:09:26.350 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:26.350 "is_configured": false, 00:09:26.350 "data_offset": 0, 00:09:26.350 "data_size": 65536 00:09:26.350 }, 00:09:26.350 { 00:09:26.350 "name": "BaseBdev3", 00:09:26.350 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:26.350 "is_configured": true, 00:09:26.350 "data_offset": 0, 00:09:26.350 "data_size": 65536 00:09:26.350 } 00:09:26.350 ] 00:09:26.350 }' 00:09:26.350 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:26.350 02:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.608 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.608 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.867 [2024-07-25 02:34:13.699758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
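At this point the raid has lost its first base bdev entirely (bdev_malloc_delete BaseBdev1 above), yet the slot keeps its UUID in base_bdevs_list with is_configured false. The trace now re-attaches BaseBdev2 by name and, further down, rebuilds the missing disk as NewBaseBdev by creating a fresh malloc bdev that reuses the recorded UUID; the raid then claims it and goes online without an explicit add call. A rough sketch of that recovery sequence, under the same assumptions as the previous sketch (rpc.py path, raid socket, jq available), with the NewBaseBdev name and the 32 MB / 512-byte-block geometry simply taken from the trace rather than being required values:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Re-attach a bdev that still exists but is currently unconfigured in the raid.
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2

# Recover the UUID the raid still remembers for the deleted first slot ...
uuid=$($rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')

# ... and recreate the disk under a new name with that UUID (32 MB with 512-byte
# blocks, i.e. the 65536 blocks reported for NewBaseBdev later in the trace).
$rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"

# Once the UUID matches, the trace shows bdev_raid claiming NewBaseBdev on its own
# and the raid transitioning from "configuring" to "online".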
00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.867 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.125 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.125 "name": "Existed_Raid", 00:09:27.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.125 "strip_size_kb": 64, 00:09:27.125 "state": "configuring", 00:09:27.125 "raid_level": "concat", 00:09:27.125 "superblock": false, 00:09:27.125 "num_base_bdevs": 3, 00:09:27.125 "num_base_bdevs_discovered": 2, 00:09:27.125 "num_base_bdevs_operational": 3, 00:09:27.125 "base_bdevs_list": [ 00:09:27.125 { 00:09:27.125 "name": null, 00:09:27.125 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:27.125 "is_configured": false, 00:09:27.125 "data_offset": 0, 00:09:27.125 "data_size": 65536 00:09:27.125 }, 00:09:27.125 { 00:09:27.125 "name": "BaseBdev2", 00:09:27.125 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:27.125 "is_configured": true, 00:09:27.125 "data_offset": 0, 00:09:27.125 "data_size": 65536 00:09:27.125 }, 00:09:27.125 { 00:09:27.125 "name": "BaseBdev3", 00:09:27.125 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:27.125 "is_configured": true, 00:09:27.125 "data_offset": 0, 00:09:27.125 "data_size": 65536 00:09:27.125 } 00:09:27.125 ] 00:09:27.125 }' 00:09:27.125 02:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.125 02:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.383 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.383 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.641 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:27.641 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.641 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:27.642 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5f6817a4-4a2e-11ef-9c8e-7947904e2597 00:09:27.900 [2024-07-25 02:34:14.696054] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:27.900 [2024-07-25 02:34:14.696071] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3eb5f0634a00 00:09:27.900 [2024-07-25 02:34:14.696074] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:27.900 [2024-07-25 02:34:14.696091] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3eb5f0697e20 00:09:27.900 [2024-07-25 
02:34:14.696139] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3eb5f0634a00 00:09:27.900 [2024-07-25 02:34:14.696142] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3eb5f0634a00 00:09:27.900 [2024-07-25 02:34:14.696165] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.900 NewBaseBdev 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:27.900 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:28.159 02:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:28.159 [ 00:09:28.159 { 00:09:28.159 "name": "NewBaseBdev", 00:09:28.159 "aliases": [ 00:09:28.159 "5f6817a4-4a2e-11ef-9c8e-7947904e2597" 00:09:28.159 ], 00:09:28.159 "product_name": "Malloc disk", 00:09:28.159 "block_size": 512, 00:09:28.159 "num_blocks": 65536, 00:09:28.159 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:28.159 "assigned_rate_limits": { 00:09:28.159 "rw_ios_per_sec": 0, 00:09:28.159 "rw_mbytes_per_sec": 0, 00:09:28.159 "r_mbytes_per_sec": 0, 00:09:28.159 "w_mbytes_per_sec": 0 00:09:28.159 }, 00:09:28.159 "claimed": true, 00:09:28.159 "claim_type": "exclusive_write", 00:09:28.159 "zoned": false, 00:09:28.159 "supported_io_types": { 00:09:28.159 "read": true, 00:09:28.159 "write": true, 00:09:28.159 "unmap": true, 00:09:28.159 "flush": true, 00:09:28.159 "reset": true, 00:09:28.159 "nvme_admin": false, 00:09:28.159 "nvme_io": false, 00:09:28.159 "nvme_io_md": false, 00:09:28.159 "write_zeroes": true, 00:09:28.159 "zcopy": true, 00:09:28.159 "get_zone_info": false, 00:09:28.159 "zone_management": false, 00:09:28.159 "zone_append": false, 00:09:28.159 "compare": false, 00:09:28.159 "compare_and_write": false, 00:09:28.159 "abort": true, 00:09:28.159 "seek_hole": false, 00:09:28.159 "seek_data": false, 00:09:28.159 "copy": true, 00:09:28.159 "nvme_iov_md": false 00:09:28.159 }, 00:09:28.159 "memory_domains": [ 00:09:28.159 { 00:09:28.159 "dma_device_id": "system", 00:09:28.159 "dma_device_type": 1 00:09:28.159 }, 00:09:28.159 { 00:09:28.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.159 "dma_device_type": 2 00:09:28.159 } 00:09:28.159 ], 00:09:28.159 "driver_specific": {} 00:09:28.159 } 00:09:28.159 ] 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.417 "name": "Existed_Raid", 00:09:28.417 "uuid": "621f8d94-4a2e-11ef-9c8e-7947904e2597", 00:09:28.417 "strip_size_kb": 64, 00:09:28.417 "state": "online", 00:09:28.417 "raid_level": "concat", 00:09:28.417 "superblock": false, 00:09:28.417 "num_base_bdevs": 3, 00:09:28.417 "num_base_bdevs_discovered": 3, 00:09:28.417 "num_base_bdevs_operational": 3, 00:09:28.417 "base_bdevs_list": [ 00:09:28.417 { 00:09:28.417 "name": "NewBaseBdev", 00:09:28.417 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:28.417 "is_configured": true, 00:09:28.417 "data_offset": 0, 00:09:28.417 "data_size": 65536 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "name": "BaseBdev2", 00:09:28.417 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:28.417 "is_configured": true, 00:09:28.417 "data_offset": 0, 00:09:28.417 "data_size": 65536 00:09:28.417 }, 00:09:28.417 { 00:09:28.417 "name": "BaseBdev3", 00:09:28.417 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:28.417 "is_configured": true, 00:09:28.417 "data_offset": 0, 00:09:28.417 "data_size": 65536 00:09:28.417 } 00:09:28.417 ] 00:09:28.417 }' 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.417 02:34:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:28.698 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:28.997 [2024-07-25 02:34:15.704197] 
bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.997 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:28.997 "name": "Existed_Raid", 00:09:28.997 "aliases": [ 00:09:28.997 "621f8d94-4a2e-11ef-9c8e-7947904e2597" 00:09:28.997 ], 00:09:28.997 "product_name": "Raid Volume", 00:09:28.997 "block_size": 512, 00:09:28.997 "num_blocks": 196608, 00:09:28.997 "uuid": "621f8d94-4a2e-11ef-9c8e-7947904e2597", 00:09:28.997 "assigned_rate_limits": { 00:09:28.997 "rw_ios_per_sec": 0, 00:09:28.997 "rw_mbytes_per_sec": 0, 00:09:28.997 "r_mbytes_per_sec": 0, 00:09:28.997 "w_mbytes_per_sec": 0 00:09:28.997 }, 00:09:28.997 "claimed": false, 00:09:28.997 "zoned": false, 00:09:28.997 "supported_io_types": { 00:09:28.997 "read": true, 00:09:28.997 "write": true, 00:09:28.997 "unmap": true, 00:09:28.997 "flush": true, 00:09:28.997 "reset": true, 00:09:28.997 "nvme_admin": false, 00:09:28.997 "nvme_io": false, 00:09:28.997 "nvme_io_md": false, 00:09:28.997 "write_zeroes": true, 00:09:28.997 "zcopy": false, 00:09:28.997 "get_zone_info": false, 00:09:28.997 "zone_management": false, 00:09:28.997 "zone_append": false, 00:09:28.997 "compare": false, 00:09:28.997 "compare_and_write": false, 00:09:28.997 "abort": false, 00:09:28.997 "seek_hole": false, 00:09:28.997 "seek_data": false, 00:09:28.997 "copy": false, 00:09:28.997 "nvme_iov_md": false 00:09:28.997 }, 00:09:28.997 "memory_domains": [ 00:09:28.997 { 00:09:28.997 "dma_device_id": "system", 00:09:28.997 "dma_device_type": 1 00:09:28.997 }, 00:09:28.997 { 00:09:28.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.997 "dma_device_type": 2 00:09:28.997 }, 00:09:28.997 { 00:09:28.997 "dma_device_id": "system", 00:09:28.997 "dma_device_type": 1 00:09:28.997 }, 00:09:28.997 { 00:09:28.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.997 "dma_device_type": 2 00:09:28.997 }, 00:09:28.997 { 00:09:28.997 "dma_device_id": "system", 00:09:28.997 "dma_device_type": 1 00:09:28.997 }, 00:09:28.997 { 00:09:28.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.997 "dma_device_type": 2 00:09:28.997 } 00:09:28.997 ], 00:09:28.998 "driver_specific": { 00:09:28.998 "raid": { 00:09:28.998 "uuid": "621f8d94-4a2e-11ef-9c8e-7947904e2597", 00:09:28.998 "strip_size_kb": 64, 00:09:28.998 "state": "online", 00:09:28.998 "raid_level": "concat", 00:09:28.998 "superblock": false, 00:09:28.998 "num_base_bdevs": 3, 00:09:28.998 "num_base_bdevs_discovered": 3, 00:09:28.998 "num_base_bdevs_operational": 3, 00:09:28.998 "base_bdevs_list": [ 00:09:28.998 { 00:09:28.998 "name": "NewBaseBdev", 00:09:28.998 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:28.998 "is_configured": true, 00:09:28.998 "data_offset": 0, 00:09:28.998 "data_size": 65536 00:09:28.998 }, 00:09:28.998 { 00:09:28.998 "name": "BaseBdev2", 00:09:28.998 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:28.998 "is_configured": true, 00:09:28.998 "data_offset": 0, 00:09:28.998 "data_size": 65536 00:09:28.998 }, 00:09:28.998 { 00:09:28.998 "name": "BaseBdev3", 00:09:28.998 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:28.998 "is_configured": true, 00:09:28.998 "data_offset": 0, 00:09:28.998 "data_size": 65536 00:09:28.998 } 00:09:28.998 ] 00:09:28.998 } 00:09:28.998 } 00:09:28.998 }' 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:28.998 BaseBdev2 00:09:28.998 BaseBdev3' 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:28.998 "name": "NewBaseBdev", 00:09:28.998 "aliases": [ 00:09:28.998 "5f6817a4-4a2e-11ef-9c8e-7947904e2597" 00:09:28.998 ], 00:09:28.998 "product_name": "Malloc disk", 00:09:28.998 "block_size": 512, 00:09:28.998 "num_blocks": 65536, 00:09:28.998 "uuid": "5f6817a4-4a2e-11ef-9c8e-7947904e2597", 00:09:28.998 "assigned_rate_limits": { 00:09:28.998 "rw_ios_per_sec": 0, 00:09:28.998 "rw_mbytes_per_sec": 0, 00:09:28.998 "r_mbytes_per_sec": 0, 00:09:28.998 "w_mbytes_per_sec": 0 00:09:28.998 }, 00:09:28.998 "claimed": true, 00:09:28.998 "claim_type": "exclusive_write", 00:09:28.998 "zoned": false, 00:09:28.998 "supported_io_types": { 00:09:28.998 "read": true, 00:09:28.998 "write": true, 00:09:28.998 "unmap": true, 00:09:28.998 "flush": true, 00:09:28.998 "reset": true, 00:09:28.998 "nvme_admin": false, 00:09:28.998 "nvme_io": false, 00:09:28.998 "nvme_io_md": false, 00:09:28.998 "write_zeroes": true, 00:09:28.998 "zcopy": true, 00:09:28.998 "get_zone_info": false, 00:09:28.998 "zone_management": false, 00:09:28.998 "zone_append": false, 00:09:28.998 "compare": false, 00:09:28.998 "compare_and_write": false, 00:09:28.998 "abort": true, 00:09:28.998 "seek_hole": false, 00:09:28.998 "seek_data": false, 00:09:28.998 "copy": true, 00:09:28.998 "nvme_iov_md": false 00:09:28.998 }, 00:09:28.998 "memory_domains": [ 00:09:28.998 { 00:09:28.998 "dma_device_id": "system", 00:09:28.998 "dma_device_type": 1 00:09:28.998 }, 00:09:28.998 { 00:09:28.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.998 "dma_device_type": 2 00:09:28.998 } 00:09:28.998 ], 00:09:28.998 "driver_specific": {} 00:09:28.998 }' 00:09:28.998 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.267 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.267 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:29.267 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.267 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.267 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:29.267 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:29.268 02:34:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:29.268 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:29.268 "name": "BaseBdev2", 00:09:29.268 "aliases": [ 00:09:29.268 "5dec5dac-4a2e-11ef-9c8e-7947904e2597" 00:09:29.268 ], 00:09:29.268 "product_name": "Malloc disk", 00:09:29.268 "block_size": 512, 00:09:29.268 "num_blocks": 65536, 00:09:29.268 "uuid": "5dec5dac-4a2e-11ef-9c8e-7947904e2597", 00:09:29.268 "assigned_rate_limits": { 00:09:29.268 "rw_ios_per_sec": 0, 00:09:29.268 "rw_mbytes_per_sec": 0, 00:09:29.268 "r_mbytes_per_sec": 0, 00:09:29.268 "w_mbytes_per_sec": 0 00:09:29.268 }, 00:09:29.268 "claimed": true, 00:09:29.268 "claim_type": "exclusive_write", 00:09:29.268 "zoned": false, 00:09:29.268 "supported_io_types": { 00:09:29.268 "read": true, 00:09:29.268 "write": true, 00:09:29.268 "unmap": true, 00:09:29.268 "flush": true, 00:09:29.268 "reset": true, 00:09:29.268 "nvme_admin": false, 00:09:29.268 "nvme_io": false, 00:09:29.268 "nvme_io_md": false, 00:09:29.268 "write_zeroes": true, 00:09:29.268 "zcopy": true, 00:09:29.268 "get_zone_info": false, 00:09:29.268 "zone_management": false, 00:09:29.268 "zone_append": false, 00:09:29.268 "compare": false, 00:09:29.268 "compare_and_write": false, 00:09:29.268 "abort": true, 00:09:29.268 "seek_hole": false, 00:09:29.268 "seek_data": false, 00:09:29.268 "copy": true, 00:09:29.268 "nvme_iov_md": false 00:09:29.268 }, 00:09:29.268 "memory_domains": [ 00:09:29.268 { 00:09:29.268 "dma_device_id": "system", 00:09:29.268 "dma_device_type": 1 00:09:29.268 }, 00:09:29.268 { 00:09:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.268 "dma_device_type": 2 00:09:29.268 } 00:09:29.268 ], 00:09:29.268 "driver_specific": {} 00:09:29.268 }' 00:09:29.268 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:09:29.528 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:29.788 "name": "BaseBdev3", 00:09:29.788 "aliases": [ 00:09:29.788 "5e38ad5e-4a2e-11ef-9c8e-7947904e2597" 00:09:29.788 ], 00:09:29.788 "product_name": "Malloc disk", 00:09:29.788 "block_size": 512, 00:09:29.788 "num_blocks": 65536, 00:09:29.788 "uuid": "5e38ad5e-4a2e-11ef-9c8e-7947904e2597", 00:09:29.788 "assigned_rate_limits": { 00:09:29.788 "rw_ios_per_sec": 0, 00:09:29.788 "rw_mbytes_per_sec": 0, 00:09:29.788 "r_mbytes_per_sec": 0, 00:09:29.788 "w_mbytes_per_sec": 0 00:09:29.788 }, 00:09:29.788 "claimed": true, 00:09:29.788 "claim_type": "exclusive_write", 00:09:29.788 "zoned": false, 00:09:29.788 "supported_io_types": { 00:09:29.788 "read": true, 00:09:29.788 "write": true, 00:09:29.788 "unmap": true, 00:09:29.788 "flush": true, 00:09:29.788 "reset": true, 00:09:29.788 "nvme_admin": false, 00:09:29.788 "nvme_io": false, 00:09:29.788 "nvme_io_md": false, 00:09:29.788 "write_zeroes": true, 00:09:29.788 "zcopy": true, 00:09:29.788 "get_zone_info": false, 00:09:29.788 "zone_management": false, 00:09:29.788 "zone_append": false, 00:09:29.788 "compare": false, 00:09:29.788 "compare_and_write": false, 00:09:29.788 "abort": true, 00:09:29.788 "seek_hole": false, 00:09:29.788 "seek_data": false, 00:09:29.788 "copy": true, 00:09:29.788 "nvme_iov_md": false 00:09:29.788 }, 00:09:29.788 "memory_domains": [ 00:09:29.788 { 00:09:29.788 "dma_device_id": "system", 00:09:29.788 "dma_device_type": 1 00:09:29.788 }, 00:09:29.788 { 00:09:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.788 "dma_device_type": 2 00:09:29.788 } 00:09:29.788 ], 00:09:29.788 "driver_specific": {} 00:09:29.788 }' 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:29.788 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:30.049 [2024-07-25 02:34:16.704381] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.049 [2024-07-25 02:34:16.704393] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.049 [2024-07-25 02:34:16.704407] 
bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.049 [2024-07-25 02:34:16.704418] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.049 [2024-07-25 02:34:16.704421] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3eb5f0634a00 name Existed_Raid, state offline 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 53874 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 53874 ']' 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 53874 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 53874 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:30.049 killing process with pid 53874 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53874' 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 53874 00:09:30.049 [2024-07-25 02:34:16.734776] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 53874 00:09:30.049 [2024-07-25 02:34:16.748629] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:30.049 00:09:30.049 real 0m17.493s 00:09:30.049 user 0m31.589s 00:09:30.049 sys 0m2.817s 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.049 02:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.049 ************************************ 00:09:30.049 END TEST raid_state_function_test 00:09:30.049 ************************************ 00:09:30.309 02:34:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:30.309 02:34:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:30.309 02:34:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:30.309 02:34:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.309 02:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.309 ************************************ 00:09:30.309 START TEST raid_state_function_test_sb 00:09:30.309 ************************************ 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:30.309 02:34:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:30.309 Process raid pid: 54575 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54575 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54575' 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54575 /var/tmp/spdk-raid.sock 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54575 ']' 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.309 02:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.309 [2024-07-25 02:34:17.000693] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:09:30.309 [2024-07-25 02:34:17.001031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:30.569 EAL: TSC is not safe to use in SMP mode 00:09:30.569 EAL: TSC is not invariant 00:09:30.569 [2024-07-25 02:34:17.420474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.828 [2024-07-25 02:34:17.512264] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:30.828 [2024-07-25 02:34:17.513929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.828 [2024-07-25 02:34:17.514489] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.828 [2024-07-25 02:34:17.514500] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.088 02:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.088 02:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:31.088 02:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:31.348 [2024-07-25 02:34:18.065485] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.348 [2024-07-25 02:34:18.065516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.348 [2024-07-25 02:34:18.065520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.348 [2024-07-25 02:34:18.065542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.348 [2024-07-25 02:34:18.065544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.348 [2024-07-25 02:34:18.065550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:31.348 
02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.348 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.607 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:31.607 "name": "Existed_Raid", 00:09:31.607 "uuid": "6421aedf-4a2e-11ef-9c8e-7947904e2597", 00:09:31.607 "strip_size_kb": 64, 00:09:31.607 "state": "configuring", 00:09:31.607 "raid_level": "concat", 00:09:31.607 "superblock": true, 00:09:31.607 "num_base_bdevs": 3, 00:09:31.607 "num_base_bdevs_discovered": 0, 00:09:31.607 "num_base_bdevs_operational": 3, 00:09:31.607 "base_bdevs_list": [ 00:09:31.607 { 00:09:31.607 "name": "BaseBdev1", 00:09:31.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.607 "is_configured": false, 00:09:31.607 "data_offset": 0, 00:09:31.607 "data_size": 0 00:09:31.607 }, 00:09:31.608 { 00:09:31.608 "name": "BaseBdev2", 00:09:31.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.608 "is_configured": false, 00:09:31.608 "data_offset": 0, 00:09:31.608 "data_size": 0 00:09:31.608 }, 00:09:31.608 { 00:09:31.608 "name": "BaseBdev3", 00:09:31.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.608 "is_configured": false, 00:09:31.608 "data_offset": 0, 00:09:31.608 "data_size": 0 00:09:31.608 } 00:09:31.608 ] 00:09:31.608 }' 00:09:31.608 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:31.608 02:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.866 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:31.866 [2024-07-25 02:34:18.697583] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.866 [2024-07-25 02:34:18.697597] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x353e21034500 name Existed_Raid, state configuring 00:09:31.867 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:32.125 [2024-07-25 02:34:18.877624] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.125 [2024-07-25 02:34:18.877650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.125 [2024-07-25 02:34:18.877653] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.125 [2024-07-25 02:34:18.877659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.125 [2024-07-25 02:34:18.877661] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.125 
[2024-07-25 02:34:18.877667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.125 02:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.385 [2024-07-25 02:34:19.058476] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.385 BaseBdev1 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:32.385 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.645 [ 00:09:32.645 { 00:09:32.645 "name": "BaseBdev1", 00:09:32.645 "aliases": [ 00:09:32.645 "64b913ef-4a2e-11ef-9c8e-7947904e2597" 00:09:32.645 ], 00:09:32.645 "product_name": "Malloc disk", 00:09:32.645 "block_size": 512, 00:09:32.645 "num_blocks": 65536, 00:09:32.645 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:32.645 "assigned_rate_limits": { 00:09:32.645 "rw_ios_per_sec": 0, 00:09:32.645 "rw_mbytes_per_sec": 0, 00:09:32.645 "r_mbytes_per_sec": 0, 00:09:32.645 "w_mbytes_per_sec": 0 00:09:32.645 }, 00:09:32.645 "claimed": true, 00:09:32.645 "claim_type": "exclusive_write", 00:09:32.645 "zoned": false, 00:09:32.645 "supported_io_types": { 00:09:32.645 "read": true, 00:09:32.645 "write": true, 00:09:32.645 "unmap": true, 00:09:32.645 "flush": true, 00:09:32.645 "reset": true, 00:09:32.645 "nvme_admin": false, 00:09:32.645 "nvme_io": false, 00:09:32.645 "nvme_io_md": false, 00:09:32.645 "write_zeroes": true, 00:09:32.645 "zcopy": true, 00:09:32.645 "get_zone_info": false, 00:09:32.645 "zone_management": false, 00:09:32.645 "zone_append": false, 00:09:32.645 "compare": false, 00:09:32.646 "compare_and_write": false, 00:09:32.646 "abort": true, 00:09:32.646 "seek_hole": false, 00:09:32.646 "seek_data": false, 00:09:32.646 "copy": true, 00:09:32.646 "nvme_iov_md": false 00:09:32.646 }, 00:09:32.646 "memory_domains": [ 00:09:32.646 { 00:09:32.646 "dma_device_id": "system", 00:09:32.646 "dma_device_type": 1 00:09:32.646 }, 00:09:32.646 { 00:09:32.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.646 "dma_device_type": 2 00:09:32.646 } 00:09:32.646 ], 00:09:32.646 "driver_specific": {} 00:09:32.646 } 00:09:32.646 ] 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.646 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.905 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:32.905 "name": "Existed_Raid", 00:09:32.905 "uuid": "649d9b23-4a2e-11ef-9c8e-7947904e2597", 00:09:32.905 "strip_size_kb": 64, 00:09:32.905 "state": "configuring", 00:09:32.906 "raid_level": "concat", 00:09:32.906 "superblock": true, 00:09:32.906 "num_base_bdevs": 3, 00:09:32.906 "num_base_bdevs_discovered": 1, 00:09:32.906 "num_base_bdevs_operational": 3, 00:09:32.906 "base_bdevs_list": [ 00:09:32.906 { 00:09:32.906 "name": "BaseBdev1", 00:09:32.906 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:32.906 "is_configured": true, 00:09:32.906 "data_offset": 2048, 00:09:32.906 "data_size": 63488 00:09:32.906 }, 00:09:32.906 { 00:09:32.906 "name": "BaseBdev2", 00:09:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.906 "is_configured": false, 00:09:32.906 "data_offset": 0, 00:09:32.906 "data_size": 0 00:09:32.906 }, 00:09:32.906 { 00:09:32.906 "name": "BaseBdev3", 00:09:32.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.906 "is_configured": false, 00:09:32.906 "data_offset": 0, 00:09:32.906 "data_size": 0 00:09:32.906 } 00:09:32.906 ] 00:09:32.906 }' 00:09:32.906 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:32.906 02:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.165 02:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:33.165 [2024-07-25 02:34:20.037838] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.165 [2024-07-25 02:34:20.037854] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x353e21034500 name Existed_Raid, state configuring 00:09:33.165 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:33.424 [2024-07-25 02:34:20.217882] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.424 [2024-07-25 
02:34:20.218533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.424 [2024-07-25 02:34:20.218569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.424 [2024-07-25 02:34:20.218572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.424 [2024-07-25 02:34:20.218578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.424 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:33.424 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:33.424 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.425 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.684 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:33.684 "name": "Existed_Raid", 00:09:33.684 "uuid": "656a1ce7-4a2e-11ef-9c8e-7947904e2597", 00:09:33.684 "strip_size_kb": 64, 00:09:33.684 "state": "configuring", 00:09:33.684 "raid_level": "concat", 00:09:33.684 "superblock": true, 00:09:33.684 "num_base_bdevs": 3, 00:09:33.684 "num_base_bdevs_discovered": 1, 00:09:33.684 "num_base_bdevs_operational": 3, 00:09:33.684 "base_bdevs_list": [ 00:09:33.684 { 00:09:33.684 "name": "BaseBdev1", 00:09:33.684 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:33.684 "is_configured": true, 00:09:33.684 "data_offset": 2048, 00:09:33.684 "data_size": 63488 00:09:33.684 }, 00:09:33.684 { 00:09:33.684 "name": "BaseBdev2", 00:09:33.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.684 "is_configured": false, 00:09:33.684 "data_offset": 0, 00:09:33.684 "data_size": 0 00:09:33.684 }, 00:09:33.684 { 00:09:33.684 "name": "BaseBdev3", 00:09:33.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.684 "is_configured": false, 00:09:33.684 "data_offset": 0, 00:09:33.684 "data_size": 0 00:09:33.684 } 00:09:33.684 ] 00:09:33.684 }' 00:09:33.684 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:33.684 
02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.944 [2024-07-25 02:34:20.834088] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.944 BaseBdev2 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:33.944 02:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:34.202 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.461 [ 00:09:34.461 { 00:09:34.461 "name": "BaseBdev2", 00:09:34.461 "aliases": [ 00:09:34.461 "65c82016-4a2e-11ef-9c8e-7947904e2597" 00:09:34.461 ], 00:09:34.461 "product_name": "Malloc disk", 00:09:34.461 "block_size": 512, 00:09:34.461 "num_blocks": 65536, 00:09:34.461 "uuid": "65c82016-4a2e-11ef-9c8e-7947904e2597", 00:09:34.461 "assigned_rate_limits": { 00:09:34.461 "rw_ios_per_sec": 0, 00:09:34.461 "rw_mbytes_per_sec": 0, 00:09:34.461 "r_mbytes_per_sec": 0, 00:09:34.461 "w_mbytes_per_sec": 0 00:09:34.461 }, 00:09:34.461 "claimed": true, 00:09:34.461 "claim_type": "exclusive_write", 00:09:34.461 "zoned": false, 00:09:34.461 "supported_io_types": { 00:09:34.461 "read": true, 00:09:34.461 "write": true, 00:09:34.461 "unmap": true, 00:09:34.461 "flush": true, 00:09:34.461 "reset": true, 00:09:34.461 "nvme_admin": false, 00:09:34.461 "nvme_io": false, 00:09:34.461 "nvme_io_md": false, 00:09:34.461 "write_zeroes": true, 00:09:34.461 "zcopy": true, 00:09:34.461 "get_zone_info": false, 00:09:34.461 "zone_management": false, 00:09:34.461 "zone_append": false, 00:09:34.461 "compare": false, 00:09:34.461 "compare_and_write": false, 00:09:34.461 "abort": true, 00:09:34.461 "seek_hole": false, 00:09:34.461 "seek_data": false, 00:09:34.461 "copy": true, 00:09:34.461 "nvme_iov_md": false 00:09:34.461 }, 00:09:34.461 "memory_domains": [ 00:09:34.461 { 00:09:34.461 "dma_device_id": "system", 00:09:34.461 "dma_device_type": 1 00:09:34.461 }, 00:09:34.461 { 00:09:34.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.461 "dma_device_type": 2 00:09:34.461 } 00:09:34.461 ], 00:09:34.461 "driver_specific": {} 00:09:34.461 } 00:09:34.461 ] 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:34.461 02:34:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.461 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.720 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.720 "name": "Existed_Raid", 00:09:34.720 "uuid": "656a1ce7-4a2e-11ef-9c8e-7947904e2597", 00:09:34.720 "strip_size_kb": 64, 00:09:34.720 "state": "configuring", 00:09:34.720 "raid_level": "concat", 00:09:34.720 "superblock": true, 00:09:34.720 "num_base_bdevs": 3, 00:09:34.720 "num_base_bdevs_discovered": 2, 00:09:34.720 "num_base_bdevs_operational": 3, 00:09:34.720 "base_bdevs_list": [ 00:09:34.720 { 00:09:34.720 "name": "BaseBdev1", 00:09:34.720 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:34.720 "is_configured": true, 00:09:34.720 "data_offset": 2048, 00:09:34.720 "data_size": 63488 00:09:34.720 }, 00:09:34.720 { 00:09:34.720 "name": "BaseBdev2", 00:09:34.720 "uuid": "65c82016-4a2e-11ef-9c8e-7947904e2597", 00:09:34.720 "is_configured": true, 00:09:34.720 "data_offset": 2048, 00:09:34.720 "data_size": 63488 00:09:34.720 }, 00:09:34.720 { 00:09:34.720 "name": "BaseBdev3", 00:09:34.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.720 "is_configured": false, 00:09:34.720 "data_offset": 0, 00:09:34.720 "data_size": 0 00:09:34.720 } 00:09:34.720 ] 00:09:34.720 }' 00:09:34.720 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.720 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.980 [2024-07-25 02:34:21.818237] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.980 [2024-07-25 02:34:21.818295] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x353e21034a00 00:09:34.980 [2024-07-25 02:34:21.818299] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.980 [2024-07-25 02:34:21.818316] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x353e21097e20 00:09:34.980 [2024-07-25 02:34:21.818347] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x353e21034a00 00:09:34.980 [2024-07-25 02:34:21.818350] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x353e21034a00 00:09:34.980 [2024-07-25 02:34:21.818364] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.980 BaseBdev3 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:34.980 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:35.240 02:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.240 [ 00:09:35.240 { 00:09:35.240 "name": "BaseBdev3", 00:09:35.240 "aliases": [ 00:09:35.240 "665e4c82-4a2e-11ef-9c8e-7947904e2597" 00:09:35.240 ], 00:09:35.240 "product_name": "Malloc disk", 00:09:35.240 "block_size": 512, 00:09:35.240 "num_blocks": 65536, 00:09:35.240 "uuid": "665e4c82-4a2e-11ef-9c8e-7947904e2597", 00:09:35.240 "assigned_rate_limits": { 00:09:35.240 "rw_ios_per_sec": 0, 00:09:35.240 "rw_mbytes_per_sec": 0, 00:09:35.240 "r_mbytes_per_sec": 0, 00:09:35.240 "w_mbytes_per_sec": 0 00:09:35.240 }, 00:09:35.240 "claimed": true, 00:09:35.240 "claim_type": "exclusive_write", 00:09:35.240 "zoned": false, 00:09:35.240 "supported_io_types": { 00:09:35.240 "read": true, 00:09:35.240 "write": true, 00:09:35.240 "unmap": true, 00:09:35.240 "flush": true, 00:09:35.240 "reset": true, 00:09:35.240 "nvme_admin": false, 00:09:35.240 "nvme_io": false, 00:09:35.240 "nvme_io_md": false, 00:09:35.240 "write_zeroes": true, 00:09:35.240 "zcopy": true, 00:09:35.240 "get_zone_info": false, 00:09:35.240 "zone_management": false, 00:09:35.240 "zone_append": false, 00:09:35.240 "compare": false, 00:09:35.240 "compare_and_write": false, 00:09:35.240 "abort": true, 00:09:35.240 "seek_hole": false, 00:09:35.240 "seek_data": false, 00:09:35.240 "copy": true, 00:09:35.240 "nvme_iov_md": false 00:09:35.240 }, 00:09:35.240 "memory_domains": [ 00:09:35.240 { 00:09:35.240 "dma_device_id": "system", 00:09:35.240 "dma_device_type": 1 00:09:35.240 }, 00:09:35.240 { 00:09:35.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.240 "dma_device_type": 2 00:09:35.240 } 00:09:35.240 ], 00:09:35.240 "driver_specific": {} 00:09:35.240 } 00:09:35.240 ] 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:35.240 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:35.500 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.500 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.500 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:35.500 "name": "Existed_Raid", 00:09:35.500 "uuid": "656a1ce7-4a2e-11ef-9c8e-7947904e2597", 00:09:35.500 "strip_size_kb": 64, 00:09:35.500 "state": "online", 00:09:35.500 "raid_level": "concat", 00:09:35.500 "superblock": true, 00:09:35.500 "num_base_bdevs": 3, 00:09:35.500 "num_base_bdevs_discovered": 3, 00:09:35.500 "num_base_bdevs_operational": 3, 00:09:35.500 "base_bdevs_list": [ 00:09:35.500 { 00:09:35.500 "name": "BaseBdev1", 00:09:35.500 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:35.500 "is_configured": true, 00:09:35.500 "data_offset": 2048, 00:09:35.500 "data_size": 63488 00:09:35.500 }, 00:09:35.500 { 00:09:35.500 "name": "BaseBdev2", 00:09:35.500 "uuid": "65c82016-4a2e-11ef-9c8e-7947904e2597", 00:09:35.500 "is_configured": true, 00:09:35.500 "data_offset": 2048, 00:09:35.500 "data_size": 63488 00:09:35.500 }, 00:09:35.500 { 00:09:35.500 "name": "BaseBdev3", 00:09:35.500 "uuid": "665e4c82-4a2e-11ef-9c8e-7947904e2597", 00:09:35.500 "is_configured": true, 00:09:35.500 "data_offset": 2048, 00:09:35.500 "data_size": 63488 00:09:35.500 } 00:09:35.500 ] 00:09:35.500 }' 00:09:35.500 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:35.500 02:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:35.760 02:34:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:35.760 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:36.019 [2024-07-25 02:34:22.774363] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.019 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:36.019 "name": "Existed_Raid", 00:09:36.019 "aliases": [ 00:09:36.019 "656a1ce7-4a2e-11ef-9c8e-7947904e2597" 00:09:36.019 ], 00:09:36.019 "product_name": "Raid Volume", 00:09:36.019 "block_size": 512, 00:09:36.019 "num_blocks": 190464, 00:09:36.019 "uuid": "656a1ce7-4a2e-11ef-9c8e-7947904e2597", 00:09:36.019 "assigned_rate_limits": { 00:09:36.019 "rw_ios_per_sec": 0, 00:09:36.019 "rw_mbytes_per_sec": 0, 00:09:36.019 "r_mbytes_per_sec": 0, 00:09:36.019 "w_mbytes_per_sec": 0 00:09:36.019 }, 00:09:36.019 "claimed": false, 00:09:36.019 "zoned": false, 00:09:36.019 "supported_io_types": { 00:09:36.019 "read": true, 00:09:36.019 "write": true, 00:09:36.019 "unmap": true, 00:09:36.019 "flush": true, 00:09:36.019 "reset": true, 00:09:36.019 "nvme_admin": false, 00:09:36.019 "nvme_io": false, 00:09:36.019 "nvme_io_md": false, 00:09:36.019 "write_zeroes": true, 00:09:36.019 "zcopy": false, 00:09:36.019 "get_zone_info": false, 00:09:36.019 "zone_management": false, 00:09:36.019 "zone_append": false, 00:09:36.019 "compare": false, 00:09:36.019 "compare_and_write": false, 00:09:36.019 "abort": false, 00:09:36.019 "seek_hole": false, 00:09:36.019 "seek_data": false, 00:09:36.019 "copy": false, 00:09:36.019 "nvme_iov_md": false 00:09:36.019 }, 00:09:36.019 "memory_domains": [ 00:09:36.019 { 00:09:36.019 "dma_device_id": "system", 00:09:36.019 "dma_device_type": 1 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.019 "dma_device_type": 2 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "dma_device_id": "system", 00:09:36.019 "dma_device_type": 1 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.019 "dma_device_type": 2 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "dma_device_id": "system", 00:09:36.019 "dma_device_type": 1 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.019 "dma_device_type": 2 00:09:36.019 } 00:09:36.019 ], 00:09:36.019 "driver_specific": { 00:09:36.019 "raid": { 00:09:36.019 "uuid": "656a1ce7-4a2e-11ef-9c8e-7947904e2597", 00:09:36.019 "strip_size_kb": 64, 00:09:36.019 "state": "online", 00:09:36.019 "raid_level": "concat", 00:09:36.019 "superblock": true, 00:09:36.019 "num_base_bdevs": 3, 00:09:36.019 "num_base_bdevs_discovered": 3, 00:09:36.019 "num_base_bdevs_operational": 3, 00:09:36.019 "base_bdevs_list": [ 00:09:36.019 { 00:09:36.019 "name": "BaseBdev1", 00:09:36.019 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:36.019 "is_configured": true, 00:09:36.019 "data_offset": 2048, 00:09:36.019 "data_size": 63488 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "name": "BaseBdev2", 00:09:36.019 "uuid": "65c82016-4a2e-11ef-9c8e-7947904e2597", 00:09:36.019 "is_configured": true, 00:09:36.019 "data_offset": 2048, 00:09:36.019 "data_size": 63488 00:09:36.019 }, 00:09:36.019 { 00:09:36.019 "name": "BaseBdev3", 00:09:36.019 "uuid": 
"665e4c82-4a2e-11ef-9c8e-7947904e2597", 00:09:36.019 "is_configured": true, 00:09:36.019 "data_offset": 2048, 00:09:36.019 "data_size": 63488 00:09:36.019 } 00:09:36.019 ] 00:09:36.019 } 00:09:36.019 } 00:09:36.019 }' 00:09:36.019 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.019 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:36.019 BaseBdev2 00:09:36.019 BaseBdev3' 00:09:36.019 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:36.019 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:36.019 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:36.279 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:36.279 "name": "BaseBdev1", 00:09:36.279 "aliases": [ 00:09:36.279 "64b913ef-4a2e-11ef-9c8e-7947904e2597" 00:09:36.279 ], 00:09:36.279 "product_name": "Malloc disk", 00:09:36.279 "block_size": 512, 00:09:36.279 "num_blocks": 65536, 00:09:36.279 "uuid": "64b913ef-4a2e-11ef-9c8e-7947904e2597", 00:09:36.279 "assigned_rate_limits": { 00:09:36.279 "rw_ios_per_sec": 0, 00:09:36.279 "rw_mbytes_per_sec": 0, 00:09:36.279 "r_mbytes_per_sec": 0, 00:09:36.279 "w_mbytes_per_sec": 0 00:09:36.279 }, 00:09:36.279 "claimed": true, 00:09:36.279 "claim_type": "exclusive_write", 00:09:36.279 "zoned": false, 00:09:36.279 "supported_io_types": { 00:09:36.279 "read": true, 00:09:36.279 "write": true, 00:09:36.279 "unmap": true, 00:09:36.279 "flush": true, 00:09:36.279 "reset": true, 00:09:36.279 "nvme_admin": false, 00:09:36.279 "nvme_io": false, 00:09:36.279 "nvme_io_md": false, 00:09:36.279 "write_zeroes": true, 00:09:36.279 "zcopy": true, 00:09:36.279 "get_zone_info": false, 00:09:36.279 "zone_management": false, 00:09:36.279 "zone_append": false, 00:09:36.279 "compare": false, 00:09:36.279 "compare_and_write": false, 00:09:36.279 "abort": true, 00:09:36.279 "seek_hole": false, 00:09:36.279 "seek_data": false, 00:09:36.279 "copy": true, 00:09:36.279 "nvme_iov_md": false 00:09:36.279 }, 00:09:36.279 "memory_domains": [ 00:09:36.279 { 00:09:36.279 "dma_device_id": "system", 00:09:36.279 "dma_device_type": 1 00:09:36.279 }, 00:09:36.279 { 00:09:36.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.279 "dma_device_type": 2 00:09:36.279 } 00:09:36.279 ], 00:09:36.279 "driver_specific": {} 00:09:36.279 }' 00:09:36.279 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.279 02:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.279 
02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:36.279 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:36.539 "name": "BaseBdev2", 00:09:36.539 "aliases": [ 00:09:36.539 "65c82016-4a2e-11ef-9c8e-7947904e2597" 00:09:36.539 ], 00:09:36.539 "product_name": "Malloc disk", 00:09:36.539 "block_size": 512, 00:09:36.539 "num_blocks": 65536, 00:09:36.539 "uuid": "65c82016-4a2e-11ef-9c8e-7947904e2597", 00:09:36.539 "assigned_rate_limits": { 00:09:36.539 "rw_ios_per_sec": 0, 00:09:36.539 "rw_mbytes_per_sec": 0, 00:09:36.539 "r_mbytes_per_sec": 0, 00:09:36.539 "w_mbytes_per_sec": 0 00:09:36.539 }, 00:09:36.539 "claimed": true, 00:09:36.539 "claim_type": "exclusive_write", 00:09:36.539 "zoned": false, 00:09:36.539 "supported_io_types": { 00:09:36.539 "read": true, 00:09:36.539 "write": true, 00:09:36.539 "unmap": true, 00:09:36.539 "flush": true, 00:09:36.539 "reset": true, 00:09:36.539 "nvme_admin": false, 00:09:36.539 "nvme_io": false, 00:09:36.539 "nvme_io_md": false, 00:09:36.539 "write_zeroes": true, 00:09:36.539 "zcopy": true, 00:09:36.539 "get_zone_info": false, 00:09:36.539 "zone_management": false, 00:09:36.539 "zone_append": false, 00:09:36.539 "compare": false, 00:09:36.539 "compare_and_write": false, 00:09:36.539 "abort": true, 00:09:36.539 "seek_hole": false, 00:09:36.539 "seek_data": false, 00:09:36.539 "copy": true, 00:09:36.539 "nvme_iov_md": false 00:09:36.539 }, 00:09:36.539 "memory_domains": [ 00:09:36.539 { 00:09:36.539 "dma_device_id": "system", 00:09:36.539 "dma_device_type": 1 00:09:36.539 }, 00:09:36.539 { 00:09:36.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.539 "dma_device_type": 2 00:09:36.539 } 00:09:36.539 ], 00:09:36.539 "driver_specific": {} 00:09:36.539 }' 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:36.539 02:34:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:36.539 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:36.798 "name": "BaseBdev3", 00:09:36.798 "aliases": [ 00:09:36.798 "665e4c82-4a2e-11ef-9c8e-7947904e2597" 00:09:36.798 ], 00:09:36.798 "product_name": "Malloc disk", 00:09:36.798 "block_size": 512, 00:09:36.798 "num_blocks": 65536, 00:09:36.798 "uuid": "665e4c82-4a2e-11ef-9c8e-7947904e2597", 00:09:36.798 "assigned_rate_limits": { 00:09:36.798 "rw_ios_per_sec": 0, 00:09:36.798 "rw_mbytes_per_sec": 0, 00:09:36.798 "r_mbytes_per_sec": 0, 00:09:36.798 "w_mbytes_per_sec": 0 00:09:36.798 }, 00:09:36.798 "claimed": true, 00:09:36.798 "claim_type": "exclusive_write", 00:09:36.798 "zoned": false, 00:09:36.798 "supported_io_types": { 00:09:36.798 "read": true, 00:09:36.798 "write": true, 00:09:36.798 "unmap": true, 00:09:36.798 "flush": true, 00:09:36.798 "reset": true, 00:09:36.798 "nvme_admin": false, 00:09:36.798 "nvme_io": false, 00:09:36.798 "nvme_io_md": false, 00:09:36.798 "write_zeroes": true, 00:09:36.798 "zcopy": true, 00:09:36.798 "get_zone_info": false, 00:09:36.798 "zone_management": false, 00:09:36.798 "zone_append": false, 00:09:36.798 "compare": false, 00:09:36.798 "compare_and_write": false, 00:09:36.798 "abort": true, 00:09:36.798 "seek_hole": false, 00:09:36.798 "seek_data": false, 00:09:36.798 "copy": true, 00:09:36.798 "nvme_iov_md": false 00:09:36.798 }, 00:09:36.798 "memory_domains": [ 00:09:36.798 { 00:09:36.798 "dma_device_id": "system", 00:09:36.798 "dma_device_type": 1 00:09:36.798 }, 00:09:36.798 { 00:09:36.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.798 "dma_device_type": 2 00:09:36.798 } 00:09:36.798 ], 00:09:36.798 "driver_specific": {} 00:09:36.798 }' 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:36.798 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:37.058 [2024-07-25 02:34:23.782530] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.058 [2024-07-25 02:34:23.782542] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.058 [2024-07-25 02:34:23.782552] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.058 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.317 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:37.317 "name": "Existed_Raid", 00:09:37.317 "uuid": "656a1ce7-4a2e-11ef-9c8e-7947904e2597", 00:09:37.317 "strip_size_kb": 64, 00:09:37.317 "state": "offline", 00:09:37.317 "raid_level": "concat", 00:09:37.317 "superblock": true, 00:09:37.317 "num_base_bdevs": 3, 00:09:37.317 "num_base_bdevs_discovered": 2, 00:09:37.317 "num_base_bdevs_operational": 2, 00:09:37.317 "base_bdevs_list": [ 00:09:37.317 { 00:09:37.317 "name": null, 00:09:37.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.317 "is_configured": false, 00:09:37.317 "data_offset": 2048, 00:09:37.317 "data_size": 63488 00:09:37.317 }, 00:09:37.317 { 00:09:37.317 "name": "BaseBdev2", 00:09:37.317 "uuid": 
"65c82016-4a2e-11ef-9c8e-7947904e2597", 00:09:37.317 "is_configured": true, 00:09:37.317 "data_offset": 2048, 00:09:37.317 "data_size": 63488 00:09:37.317 }, 00:09:37.317 { 00:09:37.317 "name": "BaseBdev3", 00:09:37.317 "uuid": "665e4c82-4a2e-11ef-9c8e-7947904e2597", 00:09:37.317 "is_configured": true, 00:09:37.317 "data_offset": 2048, 00:09:37.317 "data_size": 63488 00:09:37.317 } 00:09:37.317 ] 00:09:37.317 }' 00:09:37.317 02:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:37.317 02:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.577 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:37.837 [2024-07-25 02:34:24.555351] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.837 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:37.837 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:37.837 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:37.837 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:38.097 [2024-07-25 02:34:24.928054] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.097 [2024-07-25 02:34:24.928070] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x353e21034a00 name Existed_Raid, state offline 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.097 02:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:38.357 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:38.357 02:34:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:38.357 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:38.357 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:38.357 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:38.357 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.616 BaseBdev2 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:38.616 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.876 [ 00:09:38.876 { 00:09:38.876 "name": "BaseBdev2", 00:09:38.876 "aliases": [ 00:09:38.876 "68707d5b-4a2e-11ef-9c8e-7947904e2597" 00:09:38.876 ], 00:09:38.876 "product_name": "Malloc disk", 00:09:38.876 "block_size": 512, 00:09:38.876 "num_blocks": 65536, 00:09:38.876 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:38.876 "assigned_rate_limits": { 00:09:38.876 "rw_ios_per_sec": 0, 00:09:38.876 "rw_mbytes_per_sec": 0, 00:09:38.876 "r_mbytes_per_sec": 0, 00:09:38.876 "w_mbytes_per_sec": 0 00:09:38.876 }, 00:09:38.876 "claimed": false, 00:09:38.876 "zoned": false, 00:09:38.876 "supported_io_types": { 00:09:38.876 "read": true, 00:09:38.876 "write": true, 00:09:38.876 "unmap": true, 00:09:38.876 "flush": true, 00:09:38.876 "reset": true, 00:09:38.876 "nvme_admin": false, 00:09:38.876 "nvme_io": false, 00:09:38.876 "nvme_io_md": false, 00:09:38.876 "write_zeroes": true, 00:09:38.876 "zcopy": true, 00:09:38.876 "get_zone_info": false, 00:09:38.876 "zone_management": false, 00:09:38.876 "zone_append": false, 00:09:38.876 "compare": false, 00:09:38.876 "compare_and_write": false, 00:09:38.876 "abort": true, 00:09:38.876 "seek_hole": false, 00:09:38.876 "seek_data": false, 00:09:38.876 "copy": true, 00:09:38.876 "nvme_iov_md": false 00:09:38.876 }, 00:09:38.876 "memory_domains": [ 00:09:38.876 { 00:09:38.876 "dma_device_id": "system", 00:09:38.876 "dma_device_type": 1 00:09:38.876 }, 00:09:38.876 { 00:09:38.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.876 "dma_device_type": 2 00:09:38.876 } 00:09:38.876 ], 00:09:38.876 "driver_specific": {} 00:09:38.876 } 00:09:38.876 ] 00:09:38.876 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:38.876 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:38.876 02:34:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:38.876 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.136 BaseBdev3 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:39.136 02:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.396 [ 00:09:39.396 { 00:09:39.396 "name": "BaseBdev3", 00:09:39.396 "aliases": [ 00:09:39.396 "68bf3d80-4a2e-11ef-9c8e-7947904e2597" 00:09:39.396 ], 00:09:39.396 "product_name": "Malloc disk", 00:09:39.396 "block_size": 512, 00:09:39.396 "num_blocks": 65536, 00:09:39.396 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:39.396 "assigned_rate_limits": { 00:09:39.396 "rw_ios_per_sec": 0, 00:09:39.396 "rw_mbytes_per_sec": 0, 00:09:39.396 "r_mbytes_per_sec": 0, 00:09:39.396 "w_mbytes_per_sec": 0 00:09:39.396 }, 00:09:39.396 "claimed": false, 00:09:39.396 "zoned": false, 00:09:39.396 "supported_io_types": { 00:09:39.396 "read": true, 00:09:39.396 "write": true, 00:09:39.396 "unmap": true, 00:09:39.396 "flush": true, 00:09:39.396 "reset": true, 00:09:39.396 "nvme_admin": false, 00:09:39.396 "nvme_io": false, 00:09:39.396 "nvme_io_md": false, 00:09:39.396 "write_zeroes": true, 00:09:39.396 "zcopy": true, 00:09:39.396 "get_zone_info": false, 00:09:39.396 "zone_management": false, 00:09:39.396 "zone_append": false, 00:09:39.396 "compare": false, 00:09:39.396 "compare_and_write": false, 00:09:39.396 "abort": true, 00:09:39.396 "seek_hole": false, 00:09:39.396 "seek_data": false, 00:09:39.396 "copy": true, 00:09:39.396 "nvme_iov_md": false 00:09:39.396 }, 00:09:39.396 "memory_domains": [ 00:09:39.396 { 00:09:39.396 "dma_device_id": "system", 00:09:39.396 "dma_device_type": 1 00:09:39.396 }, 00:09:39.396 { 00:09:39.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.396 "dma_device_type": 2 00:09:39.396 } 00:09:39.396 ], 00:09:39.396 "driver_specific": {} 00:09:39.396 } 00:09:39.396 ] 00:09:39.396 02:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:39.396 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:39.396 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:39.396 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:39.657 [2024-07-25 02:34:26.349025] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.657 [2024-07-25 02:34:26.349060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.657 [2024-07-25 02:34:26.349065] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.657 [2024-07-25 02:34:26.349523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.657 "name": "Existed_Raid", 00:09:39.657 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:39.657 "strip_size_kb": 64, 00:09:39.657 "state": "configuring", 00:09:39.657 "raid_level": "concat", 00:09:39.657 "superblock": true, 00:09:39.657 "num_base_bdevs": 3, 00:09:39.657 "num_base_bdevs_discovered": 2, 00:09:39.657 "num_base_bdevs_operational": 3, 00:09:39.657 "base_bdevs_list": [ 00:09:39.657 { 00:09:39.657 "name": "BaseBdev1", 00:09:39.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.657 "is_configured": false, 00:09:39.657 "data_offset": 0, 00:09:39.657 "data_size": 0 00:09:39.657 }, 00:09:39.657 { 00:09:39.657 "name": "BaseBdev2", 00:09:39.657 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:39.657 "is_configured": true, 00:09:39.657 "data_offset": 2048, 00:09:39.657 "data_size": 63488 00:09:39.657 }, 00:09:39.657 { 00:09:39.657 "name": "BaseBdev3", 00:09:39.657 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:39.657 "is_configured": true, 00:09:39.657 "data_offset": 2048, 00:09:39.657 "data_size": 63488 00:09:39.657 } 00:09:39.657 ] 00:09:39.657 }' 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.657 02:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.917 02:34:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:40.176 [2024-07-25 02:34:26.985126] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.176 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.176 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.176 02:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.176 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.437 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.437 "name": "Existed_Raid", 00:09:40.437 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:40.437 "strip_size_kb": 64, 00:09:40.437 "state": "configuring", 00:09:40.437 "raid_level": "concat", 00:09:40.437 "superblock": true, 00:09:40.437 "num_base_bdevs": 3, 00:09:40.437 "num_base_bdevs_discovered": 1, 00:09:40.437 "num_base_bdevs_operational": 3, 00:09:40.437 "base_bdevs_list": [ 00:09:40.437 { 00:09:40.437 "name": "BaseBdev1", 00:09:40.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.437 "is_configured": false, 00:09:40.437 "data_offset": 0, 00:09:40.437 "data_size": 0 00:09:40.437 }, 00:09:40.437 { 00:09:40.437 "name": null, 00:09:40.437 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:40.437 "is_configured": false, 00:09:40.437 "data_offset": 2048, 00:09:40.437 "data_size": 63488 00:09:40.437 }, 00:09:40.437 { 00:09:40.437 "name": "BaseBdev3", 00:09:40.437 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:40.437 "is_configured": true, 00:09:40.437 "data_offset": 2048, 00:09:40.437 "data_size": 63488 00:09:40.437 } 00:09:40.437 ] 00:09:40.437 }' 00:09:40.437 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.437 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.706 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.706 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.985 [2024-07-25 02:34:27.789374] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.985 BaseBdev1 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:40.985 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:41.246 02:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.246 [ 00:09:41.246 { 00:09:41.246 "name": "BaseBdev1", 00:09:41.246 "aliases": [ 00:09:41.246 "69ed6b20-4a2e-11ef-9c8e-7947904e2597" 00:09:41.246 ], 00:09:41.246 "product_name": "Malloc disk", 00:09:41.246 "block_size": 512, 00:09:41.246 "num_blocks": 65536, 00:09:41.246 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:41.246 "assigned_rate_limits": { 00:09:41.246 "rw_ios_per_sec": 0, 00:09:41.246 "rw_mbytes_per_sec": 0, 00:09:41.246 "r_mbytes_per_sec": 0, 00:09:41.246 "w_mbytes_per_sec": 0 00:09:41.246 }, 00:09:41.246 "claimed": true, 00:09:41.246 "claim_type": "exclusive_write", 00:09:41.246 "zoned": false, 00:09:41.246 "supported_io_types": { 00:09:41.246 "read": true, 00:09:41.246 "write": true, 00:09:41.246 "unmap": true, 00:09:41.246 "flush": true, 00:09:41.246 "reset": true, 00:09:41.246 "nvme_admin": false, 00:09:41.246 "nvme_io": false, 00:09:41.246 "nvme_io_md": false, 00:09:41.246 "write_zeroes": true, 00:09:41.246 "zcopy": true, 00:09:41.246 "get_zone_info": false, 00:09:41.246 "zone_management": false, 00:09:41.246 "zone_append": false, 00:09:41.246 "compare": false, 00:09:41.246 "compare_and_write": false, 00:09:41.246 "abort": true, 00:09:41.246 "seek_hole": false, 00:09:41.246 "seek_data": false, 00:09:41.246 "copy": true, 00:09:41.246 "nvme_iov_md": false 00:09:41.246 }, 00:09:41.246 "memory_domains": [ 00:09:41.246 { 00:09:41.246 "dma_device_id": "system", 00:09:41.246 "dma_device_type": 1 00:09:41.246 }, 00:09:41.246 { 00:09:41.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.246 "dma_device_type": 2 00:09:41.246 } 00:09:41.246 ], 00:09:41.246 "driver_specific": {} 00:09:41.246 } 00:09:41.246 ] 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.246 02:34:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.246 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.505 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.505 "name": "Existed_Raid", 00:09:41.505 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:41.505 "strip_size_kb": 64, 00:09:41.505 "state": "configuring", 00:09:41.505 "raid_level": "concat", 00:09:41.505 "superblock": true, 00:09:41.505 "num_base_bdevs": 3, 00:09:41.505 "num_base_bdevs_discovered": 2, 00:09:41.505 "num_base_bdevs_operational": 3, 00:09:41.505 "base_bdevs_list": [ 00:09:41.505 { 00:09:41.505 "name": "BaseBdev1", 00:09:41.505 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:41.506 "is_configured": true, 00:09:41.506 "data_offset": 2048, 00:09:41.506 "data_size": 63488 00:09:41.506 }, 00:09:41.506 { 00:09:41.506 "name": null, 00:09:41.506 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:41.506 "is_configured": false, 00:09:41.506 "data_offset": 2048, 00:09:41.506 "data_size": 63488 00:09:41.506 }, 00:09:41.506 { 00:09:41.506 "name": "BaseBdev3", 00:09:41.506 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:41.506 "is_configured": true, 00:09:41.506 "data_offset": 2048, 00:09:41.506 "data_size": 63488 00:09:41.506 } 00:09:41.506 ] 00:09:41.506 }' 00:09:41.506 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.506 02:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.765 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.765 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.025 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:42.025 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:42.025 [2024-07-25 02:34:28.933465] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:42.285 02:34:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.285 02:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.285 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.285 "name": "Existed_Raid", 00:09:42.285 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:42.285 "strip_size_kb": 64, 00:09:42.285 "state": "configuring", 00:09:42.285 "raid_level": "concat", 00:09:42.285 "superblock": true, 00:09:42.285 "num_base_bdevs": 3, 00:09:42.285 "num_base_bdevs_discovered": 1, 00:09:42.285 "num_base_bdevs_operational": 3, 00:09:42.285 "base_bdevs_list": [ 00:09:42.285 { 00:09:42.285 "name": "BaseBdev1", 00:09:42.285 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:42.285 "is_configured": true, 00:09:42.285 "data_offset": 2048, 00:09:42.285 "data_size": 63488 00:09:42.285 }, 00:09:42.285 { 00:09:42.285 "name": null, 00:09:42.285 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:42.285 "is_configured": false, 00:09:42.285 "data_offset": 2048, 00:09:42.285 "data_size": 63488 00:09:42.285 }, 00:09:42.285 { 00:09:42.285 "name": null, 00:09:42.285 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:42.285 "is_configured": false, 00:09:42.285 "data_offset": 2048, 00:09:42.285 "data_size": 63488 00:09:42.285 } 00:09:42.285 ] 00:09:42.285 }' 00:09:42.285 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.285 02:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.545 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.545 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.805 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:42.805 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:43.065 [2024-07-25 02:34:29.753617] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.065 02:34:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.065 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.066 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:43.066 "name": "Existed_Raid", 00:09:43.066 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:43.066 "strip_size_kb": 64, 00:09:43.066 "state": "configuring", 00:09:43.066 "raid_level": "concat", 00:09:43.066 "superblock": true, 00:09:43.066 "num_base_bdevs": 3, 00:09:43.066 "num_base_bdevs_discovered": 2, 00:09:43.066 "num_base_bdevs_operational": 3, 00:09:43.066 "base_bdevs_list": [ 00:09:43.066 { 00:09:43.066 "name": "BaseBdev1", 00:09:43.066 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:43.066 "is_configured": true, 00:09:43.066 "data_offset": 2048, 00:09:43.066 "data_size": 63488 00:09:43.066 }, 00:09:43.066 { 00:09:43.066 "name": null, 00:09:43.066 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:43.066 "is_configured": false, 00:09:43.066 "data_offset": 2048, 00:09:43.066 "data_size": 63488 00:09:43.066 }, 00:09:43.066 { 00:09:43.066 "name": "BaseBdev3", 00:09:43.066 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:43.066 "is_configured": true, 00:09:43.066 "data_offset": 2048, 00:09:43.066 "data_size": 63488 00:09:43.066 } 00:09:43.066 ] 00:09:43.066 }' 00:09:43.066 02:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:43.066 02:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.326 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.326 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:43.586 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:43.586 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:43.846 
[2024-07-25 02:34:30.561750] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.846 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.105 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.105 "name": "Existed_Raid", 00:09:44.105 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:44.105 "strip_size_kb": 64, 00:09:44.105 "state": "configuring", 00:09:44.105 "raid_level": "concat", 00:09:44.105 "superblock": true, 00:09:44.105 "num_base_bdevs": 3, 00:09:44.105 "num_base_bdevs_discovered": 1, 00:09:44.105 "num_base_bdevs_operational": 3, 00:09:44.105 "base_bdevs_list": [ 00:09:44.105 { 00:09:44.105 "name": null, 00:09:44.105 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:44.105 "is_configured": false, 00:09:44.105 "data_offset": 2048, 00:09:44.105 "data_size": 63488 00:09:44.105 }, 00:09:44.105 { 00:09:44.105 "name": null, 00:09:44.105 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:44.105 "is_configured": false, 00:09:44.105 "data_offset": 2048, 00:09:44.105 "data_size": 63488 00:09:44.105 }, 00:09:44.105 { 00:09:44.105 "name": "BaseBdev3", 00:09:44.105 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:44.105 "is_configured": true, 00:09:44.105 "data_offset": 2048, 00:09:44.105 "data_size": 63488 00:09:44.105 } 00:09:44.105 ] 00:09:44.105 }' 00:09:44.105 02:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.105 02:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.365 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.365 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.365 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:44.365 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:44.624 [2024-07-25 02:34:31.382533] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.624 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.884 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.884 "name": "Existed_Raid", 00:09:44.884 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:44.884 "strip_size_kb": 64, 00:09:44.884 "state": "configuring", 00:09:44.884 "raid_level": "concat", 00:09:44.884 "superblock": true, 00:09:44.884 "num_base_bdevs": 3, 00:09:44.884 "num_base_bdevs_discovered": 2, 00:09:44.884 "num_base_bdevs_operational": 3, 00:09:44.884 "base_bdevs_list": [ 00:09:44.884 { 00:09:44.884 "name": null, 00:09:44.884 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:44.884 "is_configured": false, 00:09:44.884 "data_offset": 2048, 00:09:44.884 "data_size": 63488 00:09:44.884 }, 00:09:44.884 { 00:09:44.884 "name": "BaseBdev2", 00:09:44.884 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:44.884 "is_configured": true, 00:09:44.884 "data_offset": 2048, 00:09:44.884 "data_size": 63488 00:09:44.884 }, 00:09:44.884 { 00:09:44.884 "name": "BaseBdev3", 00:09:44.884 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:44.884 "is_configured": true, 00:09:44.884 "data_offset": 2048, 00:09:44.884 "data_size": 63488 00:09:44.884 } 00:09:44.884 ] 00:09:44.884 }' 00:09:44.884 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.884 02:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.143 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.143 02:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.143 02:34:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:45.143 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.143 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:45.403 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 69ed6b20-4a2e-11ef-9c8e-7947904e2597 00:09:45.661 [2024-07-25 02:34:32.330774] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:45.661 [2024-07-25 02:34:32.330805] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x353e21034a00 00:09:45.661 [2024-07-25 02:34:32.330809] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.661 [2024-07-25 02:34:32.330823] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x353e21097e20 00:09:45.661 [2024-07-25 02:34:32.330852] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x353e21034a00 00:09:45.661 [2024-07-25 02:34:32.330855] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x353e21034a00 00:09:45.661 [2024-07-25 02:34:32.330870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.661 NewBaseBdev 00:09:45.661 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:45.661 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:45.661 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:45.661 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:45.661 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:45.662 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:45.662 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.662 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:45.920 [ 00:09:45.920 { 00:09:45.920 "name": "NewBaseBdev", 00:09:45.920 "aliases": [ 00:09:45.920 "69ed6b20-4a2e-11ef-9c8e-7947904e2597" 00:09:45.920 ], 00:09:45.920 "product_name": "Malloc disk", 00:09:45.920 "block_size": 512, 00:09:45.920 "num_blocks": 65536, 00:09:45.920 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:45.920 "assigned_rate_limits": { 00:09:45.920 "rw_ios_per_sec": 0, 00:09:45.920 "rw_mbytes_per_sec": 0, 00:09:45.920 "r_mbytes_per_sec": 0, 00:09:45.920 "w_mbytes_per_sec": 0 00:09:45.920 }, 00:09:45.920 "claimed": true, 00:09:45.920 "claim_type": "exclusive_write", 00:09:45.920 "zoned": false, 00:09:45.920 "supported_io_types": { 00:09:45.920 "read": true, 00:09:45.920 "write": true, 00:09:45.920 "unmap": true, 00:09:45.920 "flush": true, 00:09:45.920 "reset": true, 00:09:45.920 "nvme_admin": false, 00:09:45.920 "nvme_io": false, 00:09:45.920 "nvme_io_md": false, 00:09:45.920 
"write_zeroes": true, 00:09:45.920 "zcopy": true, 00:09:45.920 "get_zone_info": false, 00:09:45.920 "zone_management": false, 00:09:45.920 "zone_append": false, 00:09:45.920 "compare": false, 00:09:45.920 "compare_and_write": false, 00:09:45.920 "abort": true, 00:09:45.920 "seek_hole": false, 00:09:45.920 "seek_data": false, 00:09:45.920 "copy": true, 00:09:45.920 "nvme_iov_md": false 00:09:45.920 }, 00:09:45.920 "memory_domains": [ 00:09:45.920 { 00:09:45.920 "dma_device_id": "system", 00:09:45.920 "dma_device_type": 1 00:09:45.920 }, 00:09:45.920 { 00:09:45.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.920 "dma_device_type": 2 00:09:45.920 } 00:09:45.920 ], 00:09:45.920 "driver_specific": {} 00:09:45.920 } 00:09:45.920 ] 00:09:45.920 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:45.920 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:45.920 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:45.920 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:45.920 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:45.920 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.921 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.180 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.180 "name": "Existed_Raid", 00:09:46.180 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:46.180 "strip_size_kb": 64, 00:09:46.180 "state": "online", 00:09:46.180 "raid_level": "concat", 00:09:46.180 "superblock": true, 00:09:46.180 "num_base_bdevs": 3, 00:09:46.180 "num_base_bdevs_discovered": 3, 00:09:46.180 "num_base_bdevs_operational": 3, 00:09:46.180 "base_bdevs_list": [ 00:09:46.180 { 00:09:46.180 "name": "NewBaseBdev", 00:09:46.180 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:46.180 "is_configured": true, 00:09:46.180 "data_offset": 2048, 00:09:46.180 "data_size": 63488 00:09:46.180 }, 00:09:46.180 { 00:09:46.180 "name": "BaseBdev2", 00:09:46.180 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:46.180 "is_configured": true, 00:09:46.180 "data_offset": 2048, 00:09:46.180 "data_size": 63488 00:09:46.180 }, 00:09:46.180 { 00:09:46.180 "name": "BaseBdev3", 00:09:46.180 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:46.180 "is_configured": true, 00:09:46.180 "data_offset": 2048, 00:09:46.180 "data_size": 63488 00:09:46.180 } 00:09:46.180 ] 
00:09:46.180 }' 00:09:46.180 02:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.180 02:34:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:46.440 [2024-07-25 02:34:33.298876] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:46.440 "name": "Existed_Raid", 00:09:46.440 "aliases": [ 00:09:46.440 "6911a6b7-4a2e-11ef-9c8e-7947904e2597" 00:09:46.440 ], 00:09:46.440 "product_name": "Raid Volume", 00:09:46.440 "block_size": 512, 00:09:46.440 "num_blocks": 190464, 00:09:46.440 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:46.440 "assigned_rate_limits": { 00:09:46.440 "rw_ios_per_sec": 0, 00:09:46.440 "rw_mbytes_per_sec": 0, 00:09:46.440 "r_mbytes_per_sec": 0, 00:09:46.440 "w_mbytes_per_sec": 0 00:09:46.440 }, 00:09:46.440 "claimed": false, 00:09:46.440 "zoned": false, 00:09:46.440 "supported_io_types": { 00:09:46.440 "read": true, 00:09:46.440 "write": true, 00:09:46.440 "unmap": true, 00:09:46.440 "flush": true, 00:09:46.440 "reset": true, 00:09:46.440 "nvme_admin": false, 00:09:46.440 "nvme_io": false, 00:09:46.440 "nvme_io_md": false, 00:09:46.440 "write_zeroes": true, 00:09:46.440 "zcopy": false, 00:09:46.440 "get_zone_info": false, 00:09:46.440 "zone_management": false, 00:09:46.440 "zone_append": false, 00:09:46.440 "compare": false, 00:09:46.440 "compare_and_write": false, 00:09:46.440 "abort": false, 00:09:46.440 "seek_hole": false, 00:09:46.440 "seek_data": false, 00:09:46.440 "copy": false, 00:09:46.440 "nvme_iov_md": false 00:09:46.440 }, 00:09:46.440 "memory_domains": [ 00:09:46.440 { 00:09:46.440 "dma_device_id": "system", 00:09:46.440 "dma_device_type": 1 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.440 "dma_device_type": 2 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "dma_device_id": "system", 00:09:46.440 "dma_device_type": 1 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.440 "dma_device_type": 2 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "dma_device_id": "system", 00:09:46.440 "dma_device_type": 1 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.440 "dma_device_type": 2 00:09:46.440 } 00:09:46.440 ], 00:09:46.440 "driver_specific": { 00:09:46.440 "raid": { 00:09:46.440 "uuid": "6911a6b7-4a2e-11ef-9c8e-7947904e2597", 00:09:46.440 
"strip_size_kb": 64, 00:09:46.440 "state": "online", 00:09:46.440 "raid_level": "concat", 00:09:46.440 "superblock": true, 00:09:46.440 "num_base_bdevs": 3, 00:09:46.440 "num_base_bdevs_discovered": 3, 00:09:46.440 "num_base_bdevs_operational": 3, 00:09:46.440 "base_bdevs_list": [ 00:09:46.440 { 00:09:46.440 "name": "NewBaseBdev", 00:09:46.440 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:46.440 "is_configured": true, 00:09:46.440 "data_offset": 2048, 00:09:46.440 "data_size": 63488 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "name": "BaseBdev2", 00:09:46.440 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:46.440 "is_configured": true, 00:09:46.440 "data_offset": 2048, 00:09:46.440 "data_size": 63488 00:09:46.440 }, 00:09:46.440 { 00:09:46.440 "name": "BaseBdev3", 00:09:46.440 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:46.440 "is_configured": true, 00:09:46.440 "data_offset": 2048, 00:09:46.440 "data_size": 63488 00:09:46.440 } 00:09:46.440 ] 00:09:46.440 } 00:09:46.440 } 00:09:46.440 }' 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:46.440 BaseBdev2 00:09:46.440 BaseBdev3' 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:46.440 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:46.699 "name": "NewBaseBdev", 00:09:46.699 "aliases": [ 00:09:46.699 "69ed6b20-4a2e-11ef-9c8e-7947904e2597" 00:09:46.699 ], 00:09:46.699 "product_name": "Malloc disk", 00:09:46.699 "block_size": 512, 00:09:46.699 "num_blocks": 65536, 00:09:46.699 "uuid": "69ed6b20-4a2e-11ef-9c8e-7947904e2597", 00:09:46.699 "assigned_rate_limits": { 00:09:46.699 "rw_ios_per_sec": 0, 00:09:46.699 "rw_mbytes_per_sec": 0, 00:09:46.699 "r_mbytes_per_sec": 0, 00:09:46.699 "w_mbytes_per_sec": 0 00:09:46.699 }, 00:09:46.699 "claimed": true, 00:09:46.699 "claim_type": "exclusive_write", 00:09:46.699 "zoned": false, 00:09:46.699 "supported_io_types": { 00:09:46.699 "read": true, 00:09:46.699 "write": true, 00:09:46.699 "unmap": true, 00:09:46.699 "flush": true, 00:09:46.699 "reset": true, 00:09:46.699 "nvme_admin": false, 00:09:46.699 "nvme_io": false, 00:09:46.699 "nvme_io_md": false, 00:09:46.699 "write_zeroes": true, 00:09:46.699 "zcopy": true, 00:09:46.699 "get_zone_info": false, 00:09:46.699 "zone_management": false, 00:09:46.699 "zone_append": false, 00:09:46.699 "compare": false, 00:09:46.699 "compare_and_write": false, 00:09:46.699 "abort": true, 00:09:46.699 "seek_hole": false, 00:09:46.699 "seek_data": false, 00:09:46.699 "copy": true, 00:09:46.699 "nvme_iov_md": false 00:09:46.699 }, 00:09:46.699 "memory_domains": [ 00:09:46.699 { 00:09:46.699 "dma_device_id": "system", 00:09:46.699 "dma_device_type": 1 00:09:46.699 }, 00:09:46.699 { 00:09:46.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.699 "dma_device_type": 2 00:09:46.699 } 00:09:46.699 ], 00:09:46.699 "driver_specific": {} 00:09:46.699 }' 00:09:46.699 02:34:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.699 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:46.700 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.700 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.700 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:46.700 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.700 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:46.700 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:46.958 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:46.958 "name": "BaseBdev2", 00:09:46.958 "aliases": [ 00:09:46.958 "68707d5b-4a2e-11ef-9c8e-7947904e2597" 00:09:46.958 ], 00:09:46.958 "product_name": "Malloc disk", 00:09:46.958 "block_size": 512, 00:09:46.958 "num_blocks": 65536, 00:09:46.958 "uuid": "68707d5b-4a2e-11ef-9c8e-7947904e2597", 00:09:46.958 "assigned_rate_limits": { 00:09:46.958 "rw_ios_per_sec": 0, 00:09:46.958 "rw_mbytes_per_sec": 0, 00:09:46.958 "r_mbytes_per_sec": 0, 00:09:46.958 "w_mbytes_per_sec": 0 00:09:46.958 }, 00:09:46.958 "claimed": true, 00:09:46.958 "claim_type": "exclusive_write", 00:09:46.958 "zoned": false, 00:09:46.958 "supported_io_types": { 00:09:46.958 "read": true, 00:09:46.958 "write": true, 00:09:46.958 "unmap": true, 00:09:46.958 "flush": true, 00:09:46.958 "reset": true, 00:09:46.958 "nvme_admin": false, 00:09:46.958 "nvme_io": false, 00:09:46.958 "nvme_io_md": false, 00:09:46.958 "write_zeroes": true, 00:09:46.958 "zcopy": true, 00:09:46.958 "get_zone_info": false, 00:09:46.958 "zone_management": false, 00:09:46.958 "zone_append": false, 00:09:46.958 "compare": false, 00:09:46.958 "compare_and_write": false, 00:09:46.958 "abort": true, 00:09:46.958 "seek_hole": false, 00:09:46.958 "seek_data": false, 00:09:46.958 "copy": true, 00:09:46.958 "nvme_iov_md": false 00:09:46.958 }, 00:09:46.958 "memory_domains": [ 00:09:46.958 { 00:09:46.958 "dma_device_id": "system", 00:09:46.958 "dma_device_type": 1 00:09:46.959 }, 00:09:46.959 { 00:09:46.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.959 "dma_device_type": 2 00:09:46.959 } 00:09:46.959 ], 00:09:46.959 "driver_specific": {} 00:09:46.959 }' 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:46.959 02:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.218 "name": "BaseBdev3", 00:09:47.218 "aliases": [ 00:09:47.218 "68bf3d80-4a2e-11ef-9c8e-7947904e2597" 00:09:47.218 ], 00:09:47.218 "product_name": "Malloc disk", 00:09:47.218 "block_size": 512, 00:09:47.218 "num_blocks": 65536, 00:09:47.218 "uuid": "68bf3d80-4a2e-11ef-9c8e-7947904e2597", 00:09:47.218 "assigned_rate_limits": { 00:09:47.218 "rw_ios_per_sec": 0, 00:09:47.218 "rw_mbytes_per_sec": 0, 00:09:47.218 "r_mbytes_per_sec": 0, 00:09:47.218 "w_mbytes_per_sec": 0 00:09:47.218 }, 00:09:47.218 "claimed": true, 00:09:47.218 "claim_type": "exclusive_write", 00:09:47.218 "zoned": false, 00:09:47.218 "supported_io_types": { 00:09:47.218 "read": true, 00:09:47.218 "write": true, 00:09:47.218 "unmap": true, 00:09:47.218 "flush": true, 00:09:47.218 "reset": true, 00:09:47.218 "nvme_admin": false, 00:09:47.218 "nvme_io": false, 00:09:47.218 "nvme_io_md": false, 00:09:47.218 "write_zeroes": true, 00:09:47.218 "zcopy": true, 00:09:47.218 "get_zone_info": false, 00:09:47.218 "zone_management": false, 00:09:47.218 "zone_append": false, 00:09:47.218 "compare": false, 00:09:47.218 "compare_and_write": false, 00:09:47.218 "abort": true, 00:09:47.218 "seek_hole": false, 00:09:47.218 "seek_data": false, 00:09:47.218 "copy": true, 00:09:47.218 "nvme_iov_md": false 00:09:47.218 }, 00:09:47.218 "memory_domains": [ 00:09:47.218 { 00:09:47.218 "dma_device_id": "system", 00:09:47.218 "dma_device_type": 1 00:09:47.218 }, 00:09:47.218 { 00:09:47.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.218 "dma_device_type": 2 00:09:47.218 } 00:09:47.218 ], 00:09:47.218 "driver_specific": {} 00:09:47.218 }' 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:47.218 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:47.478 [2024-07-25 02:34:34.291012] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.478 [2024-07-25 02:34:34.291026] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.478 [2024-07-25 02:34:34.291037] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.478 [2024-07-25 02:34:34.291061] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.478 [2024-07-25 02:34:34.291064] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x353e21034a00 name Existed_Raid, state offline 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54575 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54575 ']' 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54575 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54575 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:47.478 killing process with pid 54575 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54575' 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54575 00:09:47.478 [2024-07-25 02:34:34.319198] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.478 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54575 00:09:47.478 [2024-07-25 02:34:34.332979] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.738 02:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:09:47.738 00:09:47.738 real 0m17.518s 00:09:47.738 user 0m31.387s 00:09:47.738 sys 0m3.016s 00:09:47.738 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.738 02:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.738 ************************************ 00:09:47.738 END TEST raid_state_function_test_sb 00:09:47.738 ************************************ 00:09:47.738 02:34:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:47.738 02:34:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:47.738 02:34:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:47.738 02:34:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.738 02:34:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.738 ************************************ 00:09:47.738 START TEST raid_superblock_test 00:09:47.738 ************************************ 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55279 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55279 /var/tmp/spdk-raid.sock 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55279 ']' 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:47.738 02:34:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.738 02:34:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.738 [2024-07-25 02:34:34.580778] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:09:47.738 [2024-07-25 02:34:34.581112] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:48.307 EAL: TSC is not safe to use in SMP mode 00:09:48.307 EAL: TSC is not invariant 00:09:48.307 [2024-07-25 02:34:34.995685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.307 [2024-07-25 02:34:35.087500] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:48.307 [2024-07-25 02:34:35.089174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.307 [2024-07-25 02:34:35.089736] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.307 [2024-07-25 02:34:35.089748] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.566 02:34:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.566 02:34:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:48.826 malloc1 00:09:48.826 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:49.089 [2024-07-25 02:34:35.780733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.089 [2024-07-25 02:34:35.780774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.089 [2024-07-25 02:34:35.780782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd027434780 00:09:49.089 [2024-07-25 02:34:35.780788] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.089 [2024-07-25 02:34:35.781450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.089 [2024-07-25 02:34:35.781480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.089 pt1 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:49.089 malloc2 00:09:49.089 02:34:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:49.348 [2024-07-25 02:34:36.120762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:49.348 [2024-07-25 02:34:36.120797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.348 [2024-07-25 02:34:36.120804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd027434c80 00:09:49.348 [2024-07-25 02:34:36.120810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.348 [2024-07-25 02:34:36.121262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.348 [2024-07-25 02:34:36.121288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:49.348 pt2 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:49.348 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:49.607 malloc3 00:09:49.608 02:34:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:49.608 [2024-07-25 02:34:36.484782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:49.608 [2024-07-25 02:34:36.484819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.608 [2024-07-25 02:34:36.484827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd027435180 00:09:49.608 [2024-07-25 02:34:36.484832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.608 [2024-07-25 02:34:36.485253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.608 [2024-07-25 02:34:36.485277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:49.608 pt3 00:09:49.608 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:49.608 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:49.608 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:49.868 [2024-07-25 02:34:36.668798] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.868 [2024-07-25 02:34:36.669165] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:49.868 [2024-07-25 02:34:36.669185] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:49.868 [2024-07-25 02:34:36.669225] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fd027435400 00:09:49.869 [2024-07-25 02:34:36.669230] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.869 [2024-07-25 02:34:36.669255] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fd027497e20 00:09:49.869 [2024-07-25 02:34:36.669305] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fd027435400 00:09:49.869 [2024-07-25 02:34:36.669308] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fd027435400 00:09:49.869 [2024-07-25 02:34:36.669326] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.869 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.128 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.128 "name": "raid_bdev1", 00:09:50.128 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:50.129 "strip_size_kb": 64, 00:09:50.129 "state": "online", 00:09:50.129 "raid_level": "concat", 00:09:50.129 "superblock": true, 00:09:50.129 "num_base_bdevs": 3, 00:09:50.129 "num_base_bdevs_discovered": 3, 00:09:50.129 "num_base_bdevs_operational": 3, 00:09:50.129 "base_bdevs_list": [ 00:09:50.129 { 00:09:50.129 "name": "pt1", 00:09:50.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.129 "is_configured": true, 00:09:50.129 "data_offset": 2048, 00:09:50.129 "data_size": 63488 00:09:50.129 }, 00:09:50.129 { 00:09:50.129 "name": "pt2", 00:09:50.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.129 "is_configured": true, 00:09:50.129 "data_offset": 2048, 00:09:50.129 "data_size": 63488 00:09:50.129 }, 00:09:50.129 { 00:09:50.129 "name": "pt3", 00:09:50.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.129 "is_configured": true, 00:09:50.129 "data_offset": 2048, 00:09:50.129 "data_size": 63488 00:09:50.129 } 00:09:50.129 ] 00:09:50.129 }' 00:09:50.129 02:34:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.129 02:34:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:50.388 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:50.648 [2024-07-25 02:34:37.304865] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:50.648 "name": "raid_bdev1", 00:09:50.648 "aliases": [ 00:09:50.648 "6f3852c8-4a2e-11ef-9c8e-7947904e2597" 00:09:50.648 ], 00:09:50.648 "product_name": "Raid Volume", 00:09:50.648 "block_size": 512, 00:09:50.648 "num_blocks": 190464, 00:09:50.648 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:50.648 "assigned_rate_limits": { 00:09:50.648 "rw_ios_per_sec": 0, 00:09:50.648 "rw_mbytes_per_sec": 0, 00:09:50.648 "r_mbytes_per_sec": 0, 00:09:50.648 "w_mbytes_per_sec": 0 00:09:50.648 }, 00:09:50.648 "claimed": false, 00:09:50.648 "zoned": false, 00:09:50.648 "supported_io_types": { 00:09:50.648 "read": true, 00:09:50.648 "write": true, 00:09:50.648 "unmap": true, 
00:09:50.648 "flush": true, 00:09:50.648 "reset": true, 00:09:50.648 "nvme_admin": false, 00:09:50.648 "nvme_io": false, 00:09:50.648 "nvme_io_md": false, 00:09:50.648 "write_zeroes": true, 00:09:50.648 "zcopy": false, 00:09:50.648 "get_zone_info": false, 00:09:50.648 "zone_management": false, 00:09:50.648 "zone_append": false, 00:09:50.648 "compare": false, 00:09:50.648 "compare_and_write": false, 00:09:50.648 "abort": false, 00:09:50.648 "seek_hole": false, 00:09:50.648 "seek_data": false, 00:09:50.648 "copy": false, 00:09:50.648 "nvme_iov_md": false 00:09:50.648 }, 00:09:50.648 "memory_domains": [ 00:09:50.648 { 00:09:50.648 "dma_device_id": "system", 00:09:50.648 "dma_device_type": 1 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.648 "dma_device_type": 2 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "dma_device_id": "system", 00:09:50.648 "dma_device_type": 1 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.648 "dma_device_type": 2 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "dma_device_id": "system", 00:09:50.648 "dma_device_type": 1 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.648 "dma_device_type": 2 00:09:50.648 } 00:09:50.648 ], 00:09:50.648 "driver_specific": { 00:09:50.648 "raid": { 00:09:50.648 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:50.648 "strip_size_kb": 64, 00:09:50.648 "state": "online", 00:09:50.648 "raid_level": "concat", 00:09:50.648 "superblock": true, 00:09:50.648 "num_base_bdevs": 3, 00:09:50.648 "num_base_bdevs_discovered": 3, 00:09:50.648 "num_base_bdevs_operational": 3, 00:09:50.648 "base_bdevs_list": [ 00:09:50.648 { 00:09:50.648 "name": "pt1", 00:09:50.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.648 "is_configured": true, 00:09:50.648 "data_offset": 2048, 00:09:50.648 "data_size": 63488 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "name": "pt2", 00:09:50.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.648 "is_configured": true, 00:09:50.648 "data_offset": 2048, 00:09:50.648 "data_size": 63488 00:09:50.648 }, 00:09:50.648 { 00:09:50.648 "name": "pt3", 00:09:50.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.648 "is_configured": true, 00:09:50.648 "data_offset": 2048, 00:09:50.648 "data_size": 63488 00:09:50.648 } 00:09:50.648 ] 00:09:50.648 } 00:09:50.648 } 00:09:50.648 }' 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:50.648 pt2 00:09:50.648 pt3' 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.648 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.648 "name": "pt1", 00:09:50.648 "aliases": [ 00:09:50.648 "00000000-0000-0000-0000-000000000001" 00:09:50.648 ], 00:09:50.649 "product_name": "passthru", 00:09:50.649 "block_size": 512, 00:09:50.649 "num_blocks": 65536, 00:09:50.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.649 "assigned_rate_limits": { 
00:09:50.649 "rw_ios_per_sec": 0, 00:09:50.649 "rw_mbytes_per_sec": 0, 00:09:50.649 "r_mbytes_per_sec": 0, 00:09:50.649 "w_mbytes_per_sec": 0 00:09:50.649 }, 00:09:50.649 "claimed": true, 00:09:50.649 "claim_type": "exclusive_write", 00:09:50.649 "zoned": false, 00:09:50.649 "supported_io_types": { 00:09:50.649 "read": true, 00:09:50.649 "write": true, 00:09:50.649 "unmap": true, 00:09:50.649 "flush": true, 00:09:50.649 "reset": true, 00:09:50.649 "nvme_admin": false, 00:09:50.649 "nvme_io": false, 00:09:50.649 "nvme_io_md": false, 00:09:50.649 "write_zeroes": true, 00:09:50.649 "zcopy": true, 00:09:50.649 "get_zone_info": false, 00:09:50.649 "zone_management": false, 00:09:50.649 "zone_append": false, 00:09:50.649 "compare": false, 00:09:50.649 "compare_and_write": false, 00:09:50.649 "abort": true, 00:09:50.649 "seek_hole": false, 00:09:50.649 "seek_data": false, 00:09:50.649 "copy": true, 00:09:50.649 "nvme_iov_md": false 00:09:50.649 }, 00:09:50.649 "memory_domains": [ 00:09:50.649 { 00:09:50.649 "dma_device_id": "system", 00:09:50.649 "dma_device_type": 1 00:09:50.649 }, 00:09:50.649 { 00:09:50.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.649 "dma_device_type": 2 00:09:50.649 } 00:09:50.649 ], 00:09:50.649 "driver_specific": { 00:09:50.649 "passthru": { 00:09:50.649 "name": "pt1", 00:09:50.649 "base_bdev_name": "malloc1" 00:09:50.649 } 00:09:50.649 } 00:09:50.649 }' 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:50.649 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.908 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:50.908 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:50.908 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:50.908 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:50.908 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:50.908 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:50.908 "name": "pt2", 00:09:50.908 "aliases": [ 00:09:50.908 "00000000-0000-0000-0000-000000000002" 00:09:50.908 ], 00:09:50.908 "product_name": "passthru", 00:09:50.908 "block_size": 512, 00:09:50.908 "num_blocks": 65536, 00:09:50.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.908 "assigned_rate_limits": { 00:09:50.908 "rw_ios_per_sec": 0, 00:09:50.908 "rw_mbytes_per_sec": 0, 00:09:50.908 "r_mbytes_per_sec": 0, 00:09:50.908 "w_mbytes_per_sec": 0 00:09:50.908 
}, 00:09:50.908 "claimed": true, 00:09:50.908 "claim_type": "exclusive_write", 00:09:50.909 "zoned": false, 00:09:50.909 "supported_io_types": { 00:09:50.909 "read": true, 00:09:50.909 "write": true, 00:09:50.909 "unmap": true, 00:09:50.909 "flush": true, 00:09:50.909 "reset": true, 00:09:50.909 "nvme_admin": false, 00:09:50.909 "nvme_io": false, 00:09:50.909 "nvme_io_md": false, 00:09:50.909 "write_zeroes": true, 00:09:50.909 "zcopy": true, 00:09:50.909 "get_zone_info": false, 00:09:50.909 "zone_management": false, 00:09:50.909 "zone_append": false, 00:09:50.909 "compare": false, 00:09:50.909 "compare_and_write": false, 00:09:50.909 "abort": true, 00:09:50.909 "seek_hole": false, 00:09:50.909 "seek_data": false, 00:09:50.909 "copy": true, 00:09:50.909 "nvme_iov_md": false 00:09:50.909 }, 00:09:50.909 "memory_domains": [ 00:09:50.909 { 00:09:50.909 "dma_device_id": "system", 00:09:50.909 "dma_device_type": 1 00:09:50.909 }, 00:09:50.909 { 00:09:50.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.909 "dma_device_type": 2 00:09:50.909 } 00:09:50.909 ], 00:09:50.909 "driver_specific": { 00:09:50.909 "passthru": { 00:09:50.909 "name": "pt2", 00:09:50.909 "base_bdev_name": "malloc2" 00:09:50.909 } 00:09:50.909 } 00:09:50.909 }' 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:50.909 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:51.168 02:34:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:51.168 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:51.168 "name": "pt3", 00:09:51.168 "aliases": [ 00:09:51.168 "00000000-0000-0000-0000-000000000003" 00:09:51.168 ], 00:09:51.168 "product_name": "passthru", 00:09:51.168 "block_size": 512, 00:09:51.168 "num_blocks": 65536, 00:09:51.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.169 "assigned_rate_limits": { 00:09:51.169 "rw_ios_per_sec": 0, 00:09:51.169 "rw_mbytes_per_sec": 0, 00:09:51.169 "r_mbytes_per_sec": 0, 00:09:51.169 "w_mbytes_per_sec": 0 00:09:51.169 }, 00:09:51.169 "claimed": true, 00:09:51.169 "claim_type": "exclusive_write", 00:09:51.169 "zoned": false, 00:09:51.169 "supported_io_types": { 
00:09:51.169 "read": true, 00:09:51.169 "write": true, 00:09:51.169 "unmap": true, 00:09:51.169 "flush": true, 00:09:51.169 "reset": true, 00:09:51.169 "nvme_admin": false, 00:09:51.169 "nvme_io": false, 00:09:51.169 "nvme_io_md": false, 00:09:51.169 "write_zeroes": true, 00:09:51.169 "zcopy": true, 00:09:51.169 "get_zone_info": false, 00:09:51.169 "zone_management": false, 00:09:51.169 "zone_append": false, 00:09:51.169 "compare": false, 00:09:51.169 "compare_and_write": false, 00:09:51.169 "abort": true, 00:09:51.169 "seek_hole": false, 00:09:51.169 "seek_data": false, 00:09:51.169 "copy": true, 00:09:51.169 "nvme_iov_md": false 00:09:51.169 }, 00:09:51.169 "memory_domains": [ 00:09:51.169 { 00:09:51.169 "dma_device_id": "system", 00:09:51.169 "dma_device_type": 1 00:09:51.169 }, 00:09:51.169 { 00:09:51.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.169 "dma_device_type": 2 00:09:51.169 } 00:09:51.169 ], 00:09:51.169 "driver_specific": { 00:09:51.169 "passthru": { 00:09:51.169 "name": "pt3", 00:09:51.169 "base_bdev_name": "malloc3" 00:09:51.169 } 00:09:51.169 } 00:09:51.169 }' 00:09:51.169 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:51.169 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:51.169 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:51.169 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:51.169 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:51.429 [2024-07-25 02:34:38.288935] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6f3852c8-4a2e-11ef-9c8e-7947904e2597 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6f3852c8-4a2e-11ef-9c8e-7947904e2597 ']' 00:09:51.429 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:51.688 [2024-07-25 02:34:38.468922] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.688 [2024-07-25 02:34:38.468935] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.688 [2024-07-25 02:34:38.468949] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.688 [2024-07-25 02:34:38.468959] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.688 [2024-07-25 02:34:38.468962] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fd027435400 name raid_bdev1, state offline 00:09:51.688 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.688 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:51.948 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:51.948 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:51.948 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.948 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:51.948 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:51.948 02:34:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:52.208 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:52.208 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:52.467 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:52.727 [2024-07-25 02:34:39.528997] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:52.727 [2024-07-25 02:34:39.529469] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:52.727 [2024-07-25 02:34:39.529487] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:52.727 [2024-07-25 02:34:39.529498] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:52.727 [2024-07-25 02:34:39.529525] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:52.727 [2024-07-25 02:34:39.529532] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:52.727 [2024-07-25 02:34:39.529538] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.727 [2024-07-25 02:34:39.529542] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fd027435180 name raid_bdev1, state configuring 00:09:52.727 request: 00:09:52.727 { 00:09:52.727 "name": "raid_bdev1", 00:09:52.727 "raid_level": "concat", 00:09:52.727 "base_bdevs": [ 00:09:52.727 "malloc1", 00:09:52.727 "malloc2", 00:09:52.727 "malloc3" 00:09:52.727 ], 00:09:52.727 "strip_size_kb": 64, 00:09:52.727 "superblock": false, 00:09:52.727 "method": "bdev_raid_create", 00:09:52.727 "req_id": 1 00:09:52.727 } 00:09:52.727 Got JSON-RPC error response 00:09:52.727 response: 00:09:52.727 { 00:09:52.727 "code": -17, 00:09:52.727 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:52.727 } 00:09:52.727 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:52.727 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:52.727 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:52.727 02:34:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:52.727 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.727 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.987 [2024-07-25 02:34:39.881020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:52.987 [2024-07-25 02:34:39.881051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.987 [2024-07-25 02:34:39.881075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x1fd027434c80 00:09:52.987 [2024-07-25 02:34:39.881081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.987 [2024-07-25 02:34:39.881552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.987 [2024-07-25 02:34:39.881579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:52.987 [2024-07-25 02:34:39.881596] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:52.987 [2024-07-25 02:34:39.881605] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:52.987 pt1 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:52.987 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.247 02:34:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.247 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.247 "name": "raid_bdev1", 00:09:53.247 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:53.247 "strip_size_kb": 64, 00:09:53.247 "state": "configuring", 00:09:53.247 "raid_level": "concat", 00:09:53.247 "superblock": true, 00:09:53.247 "num_base_bdevs": 3, 00:09:53.247 "num_base_bdevs_discovered": 1, 00:09:53.247 "num_base_bdevs_operational": 3, 00:09:53.247 "base_bdevs_list": [ 00:09:53.247 { 00:09:53.247 "name": "pt1", 00:09:53.247 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.247 "is_configured": true, 00:09:53.247 "data_offset": 2048, 00:09:53.247 "data_size": 63488 00:09:53.247 }, 00:09:53.247 { 00:09:53.247 "name": null, 00:09:53.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.248 "is_configured": false, 00:09:53.248 "data_offset": 2048, 00:09:53.248 "data_size": 63488 00:09:53.248 }, 00:09:53.248 { 00:09:53.248 "name": null, 00:09:53.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.248 "is_configured": false, 00:09:53.248 "data_offset": 2048, 00:09:53.248 "data_size": 63488 00:09:53.248 } 00:09:53.248 ] 00:09:53.248 }' 00:09:53.248 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.248 02:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.508 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
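(Aside: the xtrace above, bdev_raid.sh@456-467, exercises the duplicate-superblock path: bdev_raid_create is expected to fail with JSON-RPC error -17 "File exists" because the malloc bdevs still carry the superblock written for raid_bdev1, after which pt1 is re-registered as a passthru over malloc1. A rough by-hand sketch of the same sequence, assuming the bdev_svc target from this run is still listening on /var/tmp/spdk-raid.sock and using the rpc.py path shown in the trace:)

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Expected to be refused with -17 (File exists): the malloc bdevs already
    # hold a superblock belonging to raid_bdev1.
    $RPC bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 \
        || echo "create refused as expected"
    # Re-register the passthru bdev on top of malloc1 with its fixed UUID.
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # raid_bdev1 should now be reported again, in the "configuring" state.
    $RPC bdev_raid_get_bdevs all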
00:09:53.508 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.767 [2024-07-25 02:34:40.525060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.767 [2024-07-25 02:34:40.525093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.767 [2024-07-25 02:34:40.525101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd027435680 00:09:53.767 [2024-07-25 02:34:40.525106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.767 [2024-07-25 02:34:40.525198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.767 [2024-07-25 02:34:40.525204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.767 [2024-07-25 02:34:40.525218] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:53.767 [2024-07-25 02:34:40.525225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.767 pt2 00:09:53.767 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:54.027 [2024-07-25 02:34:40.705073] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.027 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.028 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.028 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.028 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.028 "name": "raid_bdev1", 00:09:54.028 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:54.028 "strip_size_kb": 64, 00:09:54.028 "state": "configuring", 00:09:54.028 "raid_level": "concat", 00:09:54.028 "superblock": true, 00:09:54.028 "num_base_bdevs": 3, 00:09:54.028 "num_base_bdevs_discovered": 1, 00:09:54.028 "num_base_bdevs_operational": 3, 00:09:54.028 "base_bdevs_list": [ 00:09:54.028 { 00:09:54.028 "name": "pt1", 00:09:54.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.028 "is_configured": 
true, 00:09:54.028 "data_offset": 2048, 00:09:54.028 "data_size": 63488 00:09:54.028 }, 00:09:54.028 { 00:09:54.028 "name": null, 00:09:54.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.028 "is_configured": false, 00:09:54.028 "data_offset": 2048, 00:09:54.028 "data_size": 63488 00:09:54.028 }, 00:09:54.028 { 00:09:54.028 "name": null, 00:09:54.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.028 "is_configured": false, 00:09:54.028 "data_offset": 2048, 00:09:54.028 "data_size": 63488 00:09:54.028 } 00:09:54.028 ] 00:09:54.028 }' 00:09:54.028 02:34:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.028 02:34:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.288 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:54.288 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:54.288 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.548 [2024-07-25 02:34:41.337114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.548 [2024-07-25 02:34:41.337144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.548 [2024-07-25 02:34:41.337150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd027435680 00:09:54.548 [2024-07-25 02:34:41.337156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.548 [2024-07-25 02:34:41.337239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.548 [2024-07-25 02:34:41.337249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.548 [2024-07-25 02:34:41.337264] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.548 [2024-07-25 02:34:41.337272] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.548 pt2 00:09:54.548 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:54.548 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:54.548 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.808 [2024-07-25 02:34:41.521135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.808 [2024-07-25 02:34:41.521165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.808 [2024-07-25 02:34:41.521171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fd027435400 00:09:54.808 [2024-07-25 02:34:41.521176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.808 [2024-07-25 02:34:41.521250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.808 [2024-07-25 02:34:41.521256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.808 [2024-07-25 02:34:41.521268] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:54.808 [2024-07-25 02:34:41.521273] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:09:54.808 [2024-07-25 02:34:41.521289] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fd027434780 00:09:54.808 [2024-07-25 02:34:41.521292] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:54.808 [2024-07-25 02:34:41.521308] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fd027497e20 00:09:54.808 [2024-07-25 02:34:41.521341] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fd027434780 00:09:54.808 [2024-07-25 02:34:41.521344] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fd027434780 00:09:54.808 [2024-07-25 02:34:41.521359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.808 pt3 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.808 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.068 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.068 "name": "raid_bdev1", 00:09:55.068 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:55.068 "strip_size_kb": 64, 00:09:55.068 "state": "online", 00:09:55.068 "raid_level": "concat", 00:09:55.068 "superblock": true, 00:09:55.068 "num_base_bdevs": 3, 00:09:55.068 "num_base_bdevs_discovered": 3, 00:09:55.068 "num_base_bdevs_operational": 3, 00:09:55.068 "base_bdevs_list": [ 00:09:55.068 { 00:09:55.068 "name": "pt1", 00:09:55.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.068 "is_configured": true, 00:09:55.068 "data_offset": 2048, 00:09:55.068 "data_size": 63488 00:09:55.068 }, 00:09:55.068 { 00:09:55.068 "name": "pt2", 00:09:55.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.068 "is_configured": true, 00:09:55.068 "data_offset": 2048, 00:09:55.068 "data_size": 63488 00:09:55.068 }, 00:09:55.068 { 00:09:55.068 "name": "pt3", 00:09:55.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.068 "is_configured": true, 00:09:55.068 "data_offset": 2048, 
00:09:55.068 "data_size": 63488 00:09:55.068 } 00:09:55.068 ] 00:09:55.068 }' 00:09:55.068 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.068 02:34:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:55.328 02:34:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:55.328 [2024-07-25 02:34:42.161218] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.328 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:55.328 "name": "raid_bdev1", 00:09:55.328 "aliases": [ 00:09:55.328 "6f3852c8-4a2e-11ef-9c8e-7947904e2597" 00:09:55.328 ], 00:09:55.328 "product_name": "Raid Volume", 00:09:55.328 "block_size": 512, 00:09:55.328 "num_blocks": 190464, 00:09:55.328 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:55.328 "assigned_rate_limits": { 00:09:55.328 "rw_ios_per_sec": 0, 00:09:55.328 "rw_mbytes_per_sec": 0, 00:09:55.328 "r_mbytes_per_sec": 0, 00:09:55.328 "w_mbytes_per_sec": 0 00:09:55.328 }, 00:09:55.328 "claimed": false, 00:09:55.328 "zoned": false, 00:09:55.328 "supported_io_types": { 00:09:55.328 "read": true, 00:09:55.328 "write": true, 00:09:55.328 "unmap": true, 00:09:55.328 "flush": true, 00:09:55.328 "reset": true, 00:09:55.328 "nvme_admin": false, 00:09:55.328 "nvme_io": false, 00:09:55.328 "nvme_io_md": false, 00:09:55.328 "write_zeroes": true, 00:09:55.328 "zcopy": false, 00:09:55.328 "get_zone_info": false, 00:09:55.328 "zone_management": false, 00:09:55.328 "zone_append": false, 00:09:55.328 "compare": false, 00:09:55.328 "compare_and_write": false, 00:09:55.328 "abort": false, 00:09:55.328 "seek_hole": false, 00:09:55.328 "seek_data": false, 00:09:55.328 "copy": false, 00:09:55.328 "nvme_iov_md": false 00:09:55.328 }, 00:09:55.328 "memory_domains": [ 00:09:55.328 { 00:09:55.328 "dma_device_id": "system", 00:09:55.328 "dma_device_type": 1 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.328 "dma_device_type": 2 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "system", 00:09:55.328 "dma_device_type": 1 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.328 "dma_device_type": 2 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "system", 00:09:55.328 "dma_device_type": 1 00:09:55.328 }, 00:09:55.328 { 00:09:55.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.329 "dma_device_type": 2 00:09:55.329 } 00:09:55.329 ], 00:09:55.329 "driver_specific": { 00:09:55.329 "raid": { 00:09:55.329 "uuid": "6f3852c8-4a2e-11ef-9c8e-7947904e2597", 00:09:55.329 "strip_size_kb": 64, 00:09:55.329 
"state": "online", 00:09:55.329 "raid_level": "concat", 00:09:55.329 "superblock": true, 00:09:55.329 "num_base_bdevs": 3, 00:09:55.329 "num_base_bdevs_discovered": 3, 00:09:55.329 "num_base_bdevs_operational": 3, 00:09:55.329 "base_bdevs_list": [ 00:09:55.329 { 00:09:55.329 "name": "pt1", 00:09:55.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.329 "is_configured": true, 00:09:55.329 "data_offset": 2048, 00:09:55.329 "data_size": 63488 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "name": "pt2", 00:09:55.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.329 "is_configured": true, 00:09:55.329 "data_offset": 2048, 00:09:55.329 "data_size": 63488 00:09:55.329 }, 00:09:55.329 { 00:09:55.329 "name": "pt3", 00:09:55.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.329 "is_configured": true, 00:09:55.329 "data_offset": 2048, 00:09:55.329 "data_size": 63488 00:09:55.329 } 00:09:55.329 ] 00:09:55.329 } 00:09:55.329 } 00:09:55.329 }' 00:09:55.329 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.329 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:55.329 pt2 00:09:55.329 pt3' 00:09:55.329 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:55.329 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:55.329 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:55.589 "name": "pt1", 00:09:55.589 "aliases": [ 00:09:55.589 "00000000-0000-0000-0000-000000000001" 00:09:55.589 ], 00:09:55.589 "product_name": "passthru", 00:09:55.589 "block_size": 512, 00:09:55.589 "num_blocks": 65536, 00:09:55.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.589 "assigned_rate_limits": { 00:09:55.589 "rw_ios_per_sec": 0, 00:09:55.589 "rw_mbytes_per_sec": 0, 00:09:55.589 "r_mbytes_per_sec": 0, 00:09:55.589 "w_mbytes_per_sec": 0 00:09:55.589 }, 00:09:55.589 "claimed": true, 00:09:55.589 "claim_type": "exclusive_write", 00:09:55.589 "zoned": false, 00:09:55.589 "supported_io_types": { 00:09:55.589 "read": true, 00:09:55.589 "write": true, 00:09:55.589 "unmap": true, 00:09:55.589 "flush": true, 00:09:55.589 "reset": true, 00:09:55.589 "nvme_admin": false, 00:09:55.589 "nvme_io": false, 00:09:55.589 "nvme_io_md": false, 00:09:55.589 "write_zeroes": true, 00:09:55.589 "zcopy": true, 00:09:55.589 "get_zone_info": false, 00:09:55.589 "zone_management": false, 00:09:55.589 "zone_append": false, 00:09:55.589 "compare": false, 00:09:55.589 "compare_and_write": false, 00:09:55.589 "abort": true, 00:09:55.589 "seek_hole": false, 00:09:55.589 "seek_data": false, 00:09:55.589 "copy": true, 00:09:55.589 "nvme_iov_md": false 00:09:55.589 }, 00:09:55.589 "memory_domains": [ 00:09:55.589 { 00:09:55.589 "dma_device_id": "system", 00:09:55.589 "dma_device_type": 1 00:09:55.589 }, 00:09:55.589 { 00:09:55.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.589 "dma_device_type": 2 00:09:55.589 } 00:09:55.589 ], 00:09:55.589 "driver_specific": { 00:09:55.589 "passthru": { 00:09:55.589 "name": "pt1", 00:09:55.589 "base_bdev_name": "malloc1" 00:09:55.589 } 00:09:55.589 } 00:09:55.589 }' 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:55.589 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:55.849 "name": "pt2", 00:09:55.849 "aliases": [ 00:09:55.849 "00000000-0000-0000-0000-000000000002" 00:09:55.849 ], 00:09:55.849 "product_name": "passthru", 00:09:55.849 "block_size": 512, 00:09:55.849 "num_blocks": 65536, 00:09:55.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.849 "assigned_rate_limits": { 00:09:55.849 "rw_ios_per_sec": 0, 00:09:55.849 "rw_mbytes_per_sec": 0, 00:09:55.849 "r_mbytes_per_sec": 0, 00:09:55.849 "w_mbytes_per_sec": 0 00:09:55.849 }, 00:09:55.849 "claimed": true, 00:09:55.849 "claim_type": "exclusive_write", 00:09:55.849 "zoned": false, 00:09:55.849 "supported_io_types": { 00:09:55.849 "read": true, 00:09:55.849 "write": true, 00:09:55.849 "unmap": true, 00:09:55.849 "flush": true, 00:09:55.849 "reset": true, 00:09:55.849 "nvme_admin": false, 00:09:55.849 "nvme_io": false, 00:09:55.849 "nvme_io_md": false, 00:09:55.849 "write_zeroes": true, 00:09:55.849 "zcopy": true, 00:09:55.849 "get_zone_info": false, 00:09:55.849 "zone_management": false, 00:09:55.849 "zone_append": false, 00:09:55.849 "compare": false, 00:09:55.849 "compare_and_write": false, 00:09:55.849 "abort": true, 00:09:55.849 "seek_hole": false, 00:09:55.849 "seek_data": false, 00:09:55.849 "copy": true, 00:09:55.849 "nvme_iov_md": false 00:09:55.849 }, 00:09:55.849 "memory_domains": [ 00:09:55.849 { 00:09:55.849 "dma_device_id": "system", 00:09:55.849 "dma_device_type": 1 00:09:55.849 }, 00:09:55.849 { 00:09:55.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.849 "dma_device_type": 2 00:09:55.849 } 00:09:55.849 ], 00:09:55.849 "driver_specific": { 00:09:55.849 "passthru": { 00:09:55.849 "name": "pt2", 00:09:55.849 "base_bdev_name": "malloc2" 00:09:55.849 } 00:09:55.849 } 00:09:55.849 }' 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.849 
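(Aside: the verify_raid_bdev_properties pass traced above dumps raid_bdev1, pulls the configured base bdev names out with jq, and then checks block_size/md_size/md_interleave/dif_type for each passthru bdev. A condensed sketch of that loop, using only the rpc.py and jq invocations seen in this run and assuming the same socket and pt1..pt3:)

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    names=$($RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]' |
            jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $names; do
        info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        # Each passthru base bdev is expected to report a 512-byte block size and
        # no metadata/DIF configuration (null md_size, md_interleave, dif_type).
        [[ $(jq .block_size <<< "$info") == 512 ]]
        [[ $(jq .md_size <<< "$info") == null ]]
        [[ $(jq .dif_type <<< "$info") == null ]]
    done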
02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:55.849 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:56.109 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:56.109 "name": "pt3", 00:09:56.109 "aliases": [ 00:09:56.109 "00000000-0000-0000-0000-000000000003" 00:09:56.109 ], 00:09:56.109 "product_name": "passthru", 00:09:56.109 "block_size": 512, 00:09:56.109 "num_blocks": 65536, 00:09:56.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.109 "assigned_rate_limits": { 00:09:56.109 "rw_ios_per_sec": 0, 00:09:56.109 "rw_mbytes_per_sec": 0, 00:09:56.109 "r_mbytes_per_sec": 0, 00:09:56.109 "w_mbytes_per_sec": 0 00:09:56.109 }, 00:09:56.109 "claimed": true, 00:09:56.109 "claim_type": "exclusive_write", 00:09:56.109 "zoned": false, 00:09:56.109 "supported_io_types": { 00:09:56.109 "read": true, 00:09:56.109 "write": true, 00:09:56.109 "unmap": true, 00:09:56.109 "flush": true, 00:09:56.109 "reset": true, 00:09:56.109 "nvme_admin": false, 00:09:56.109 "nvme_io": false, 00:09:56.109 "nvme_io_md": false, 00:09:56.109 "write_zeroes": true, 00:09:56.109 "zcopy": true, 00:09:56.109 "get_zone_info": false, 00:09:56.109 "zone_management": false, 00:09:56.109 "zone_append": false, 00:09:56.109 "compare": false, 00:09:56.109 "compare_and_write": false, 00:09:56.109 "abort": true, 00:09:56.109 "seek_hole": false, 00:09:56.109 "seek_data": false, 00:09:56.109 "copy": true, 00:09:56.109 "nvme_iov_md": false 00:09:56.109 }, 00:09:56.109 "memory_domains": [ 00:09:56.109 { 00:09:56.109 "dma_device_id": "system", 00:09:56.109 "dma_device_type": 1 00:09:56.109 }, 00:09:56.109 { 00:09:56.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.109 "dma_device_type": 2 00:09:56.109 } 00:09:56.109 ], 00:09:56.109 "driver_specific": { 00:09:56.109 "passthru": { 00:09:56.109 "name": "pt3", 00:09:56.109 "base_bdev_name": "malloc3" 00:09:56.109 } 00:09:56.109 } 00:09:56.109 }' 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.110 02:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.110 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:56.110 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:56.110 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:56.370 [2024-07-25 02:34:43.173282] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6f3852c8-4a2e-11ef-9c8e-7947904e2597 '!=' 6f3852c8-4a2e-11ef-9c8e-7947904e2597 ']' 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55279 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55279 ']' 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55279 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55279 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:56.370 killing process with pid 55279 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55279' 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55279 00:09:56.370 [2024-07-25 02:34:43.204718] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.370 [2024-07-25 02:34:43.204734] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.370 [2024-07-25 02:34:43.204756] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.370 [2024-07-25 02:34:43.204759] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fd027434780 name raid_bdev1, state offline 00:09:56.370 02:34:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # wait 55279 00:09:56.370 [2024-07-25 02:34:43.218756] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.629 02:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:56.629 00:09:56.629 real 0m8.818s 00:09:56.629 user 0m15.467s 00:09:56.629 sys 0m1.476s 00:09:56.629 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.629 02:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.629 ************************************ 00:09:56.629 END TEST raid_superblock_test 00:09:56.629 ************************************ 00:09:56.629 02:34:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:56.629 02:34:43 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:56.629 02:34:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:56.629 02:34:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.629 02:34:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.629 ************************************ 00:09:56.629 START TEST raid_read_error_test 00:09:56.629 ************************************ 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GTiAXZ8pVe 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55618 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55618 /var/tmp/spdk-raid.sock 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55618 ']' 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.629 02:34:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.629 [2024-07-25 02:34:43.476303] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:09:56.629 [2024-07-25 02:34:43.476603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:57.196 EAL: TSC is not safe to use in SMP mode 00:09:57.196 EAL: TSC is not invariant 00:09:57.196 [2024-07-25 02:34:43.891329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.196 [2024-07-25 02:34:43.984461] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:57.196 [2024-07-25 02:34:43.986120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.196 [2024-07-25 02:34:43.986701] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.196 [2024-07-25 02:34:43.986712] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.763 02:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.764 02:34:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:57.764 02:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:57.764 02:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.764 BaseBdev1_malloc 00:09:57.764 02:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:58.022 true 00:09:58.022 02:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:58.022 [2024-07-25 02:34:44.909546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:58.022 [2024-07-25 02:34:44.909603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.022 [2024-07-25 02:34:44.909621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151dcbe34780 00:09:58.022 [2024-07-25 02:34:44.909627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.022 [2024-07-25 02:34:44.910054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.022 [2024-07-25 02:34:44.910079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:58.022 BaseBdev1 00:09:58.022 02:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:58.022 02:34:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.281 BaseBdev2_malloc 00:09:58.281 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:58.540 true 00:09:58.540 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.540 [2024-07-25 02:34:45.437597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.540 [2024-07-25 02:34:45.437634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.540 [2024-07-25 02:34:45.437669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151dcbe34c80 00:09:58.540 [2024-07-25 02:34:45.437675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.540 [2024-07-25 02:34:45.438087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.540 [2024-07-25 02:34:45.438114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:09:58.540 BaseBdev2 00:09:58.799 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:58.799 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:58.799 BaseBdev3_malloc 00:09:58.799 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:59.058 true 00:09:59.058 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:59.318 [2024-07-25 02:34:45.981633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:59.318 [2024-07-25 02:34:45.981672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.318 [2024-07-25 02:34:45.981691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151dcbe35180 00:09:59.318 [2024-07-25 02:34:45.981713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.318 [2024-07-25 02:34:45.982136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.318 [2024-07-25 02:34:45.982161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:59.318 BaseBdev3 00:09:59.318 02:34:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:59.318 [2024-07-25 02:34:46.161657] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.318 [2024-07-25 02:34:46.162050] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.318 [2024-07-25 02:34:46.162089] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.318 [2024-07-25 02:34:46.162137] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x151dcbe35400 00:09:59.318 [2024-07-25 02:34:46.162142] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.318 [2024-07-25 02:34:46.162173] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x151dcbea0e20 00:09:59.318 [2024-07-25 02:34:46.162224] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x151dcbe35400 00:09:59.318 [2024-07-25 02:34:46.162231] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x151dcbe35400 00:09:59.318 [2024-07-25 02:34:46.162248] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.318 
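(Aside: in the read-error test each base bdev is a three-layer stack built over the bdevperf instance: a malloc bdev, an error-injection bdev registered on top of it as EE_<name>, and a passthru bdev exposed as BaseBdevN; the concat raid is then created over the passthru bdevs with a superblock (-s). A sketch of the stack for one base bdev, using the rpc.py calls traced above and assuming the same socket:)

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MB malloc bdev with 512-byte blocks, wrapped by an error bdev
    # (registered as EE_BaseBdev1_malloc) and exposed as passthru BaseBdev1.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $RPC bdev_error_create BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # After repeating the above for BaseBdev2 and BaseBdev3, assemble the raid:
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s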
02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.318 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.576 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.576 "name": "raid_bdev1", 00:09:59.576 "uuid": "74e0d18f-4a2e-11ef-9c8e-7947904e2597", 00:09:59.576 "strip_size_kb": 64, 00:09:59.576 "state": "online", 00:09:59.576 "raid_level": "concat", 00:09:59.576 "superblock": true, 00:09:59.576 "num_base_bdevs": 3, 00:09:59.576 "num_base_bdevs_discovered": 3, 00:09:59.576 "num_base_bdevs_operational": 3, 00:09:59.576 "base_bdevs_list": [ 00:09:59.576 { 00:09:59.576 "name": "BaseBdev1", 00:09:59.576 "uuid": "020e505c-1243-dc5a-9356-f0022c5a9efa", 00:09:59.576 "is_configured": true, 00:09:59.576 "data_offset": 2048, 00:09:59.576 "data_size": 63488 00:09:59.576 }, 00:09:59.576 { 00:09:59.576 "name": "BaseBdev2", 00:09:59.576 "uuid": "8fc091ac-ee9f-a05c-ae6c-f29a95da578c", 00:09:59.576 "is_configured": true, 00:09:59.576 "data_offset": 2048, 00:09:59.576 "data_size": 63488 00:09:59.576 }, 00:09:59.576 { 00:09:59.576 "name": "BaseBdev3", 00:09:59.576 "uuid": "1f7301c0-6b61-de5e-a45e-fd0c1c40dbb3", 00:09:59.576 "is_configured": true, 00:09:59.576 "data_offset": 2048, 00:09:59.576 "data_size": 63488 00:09:59.576 } 00:09:59.576 ] 00:09:59.576 }' 00:09:59.576 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.576 02:34:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.834 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:59.834 02:34:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:59.834 [2024-07-25 02:34:46.701744] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x151dcbea0ec0 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:01.213 
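(Aside: once raid_bdev1 is online under bdevperf, the test injects a read failure into the first base bdev's error bdev and re-drives the workload; since concat has no redundancy, the test later checks that the failure rate reported by bdevperf is nonzero. A sketch of that step with the commands from this trace, assuming the bdevperf instance and socket from the run above:)

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Make reads against the error bdev under BaseBdev1 fail.
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    # Drive the randrw workload defined on the bdevperf command line above.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    # Tear down when done.
    $RPC bdev_raid_delete raid_bdev1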
02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.213 02:34:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.213 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:01.213 "name": "raid_bdev1", 00:10:01.213 "uuid": "74e0d18f-4a2e-11ef-9c8e-7947904e2597", 00:10:01.213 "strip_size_kb": 64, 00:10:01.213 "state": "online", 00:10:01.213 "raid_level": "concat", 00:10:01.213 "superblock": true, 00:10:01.213 "num_base_bdevs": 3, 00:10:01.213 "num_base_bdevs_discovered": 3, 00:10:01.213 "num_base_bdevs_operational": 3, 00:10:01.213 "base_bdevs_list": [ 00:10:01.213 { 00:10:01.213 "name": "BaseBdev1", 00:10:01.213 "uuid": "020e505c-1243-dc5a-9356-f0022c5a9efa", 00:10:01.213 "is_configured": true, 00:10:01.213 "data_offset": 2048, 00:10:01.213 "data_size": 63488 00:10:01.213 }, 00:10:01.213 { 00:10:01.213 "name": "BaseBdev2", 00:10:01.213 "uuid": "8fc091ac-ee9f-a05c-ae6c-f29a95da578c", 00:10:01.213 "is_configured": true, 00:10:01.213 "data_offset": 2048, 00:10:01.213 "data_size": 63488 00:10:01.213 }, 00:10:01.213 { 00:10:01.213 "name": "BaseBdev3", 00:10:01.213 "uuid": "1f7301c0-6b61-de5e-a45e-fd0c1c40dbb3", 00:10:01.213 "is_configured": true, 00:10:01.213 "data_offset": 2048, 00:10:01.213 "data_size": 63488 00:10:01.213 } 00:10:01.213 ] 00:10:01.213 }' 00:10:01.213 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:01.213 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.473 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:01.733 [2024-07-25 02:34:48.542405] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.733 [2024-07-25 02:34:48.542431] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.733 [2024-07-25 02:34:48.542703] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.733 [2024-07-25 02:34:48.542710] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.733 [2024-07-25 02:34:48.542716] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.733 [2024-07-25 02:34:48.542720] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x151dcbe35400 name raid_bdev1, state offline 00:10:01.733 0 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55618 00:10:01.733 02:34:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55618 ']' 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55618 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55618 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:01.733 killing process with pid 55618 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55618' 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55618 00:10:01.733 [2024-07-25 02:34:48.573301] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.733 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55618 00:10:01.733 [2024-07-25 02:34:48.586986] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GTiAXZ8pVe 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.54 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.54 != \0\.\0\0 ]] 00:10:01.993 00:10:01.993 real 0m5.310s 00:10:01.993 user 0m7.983s 00:10:01.993 sys 0m0.897s 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.993 02:34:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.993 ************************************ 00:10:01.993 END TEST raid_read_error_test 00:10:01.993 ************************************ 00:10:01.993 02:34:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:01.993 02:34:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:01.993 02:34:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:01.993 02:34:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.993 02:34:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.993 ************************************ 00:10:01.993 START TEST raid_write_error_test 00:10:01.994 ************************************ 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:01.994 02:34:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.UsmjFEy2vA 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55745 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55745 /var/tmp/spdk-raid.sock 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 55745 ']' 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
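For orientation before the trace continues: the write-error test drives the bdevperf target entirely through rpc.py calls against /var/tmp/spdk-raid.sock. Condensed from the commands that appear verbatim in this trace, the flow is roughly the following sketch (the $RPC and $SPDK_REPO shorthands and the for-loop are illustrative only; bdev_raid.sh spells out each call separately, exactly as logged):

    RPC="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # malloc backing device -> error bdev wrapper -> passthru exposed as BaseBdevN
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        $RPC bdev_error_create BaseBdev${i}_malloc
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # assemble the passthru bdevs into a concat raid (64 KB strips) with superblock
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # start the I/O workload, then inject write failures on the first base bdev
    $SPDK_REPO/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # verify the raid stays online with 3/3 base bdevs, then tear down
    $RPC bdev_raid_get_bdevs all
    $RPC bdev_raid_delete raid_bdev1

The failure rate is afterwards parsed out of the bdevperf log (/raidtest/tmp.UsmjFEy2vA in this run) and, because concat has no redundancy, the test only asserts that the observed fail-per-second value is non-zero.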
00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.994 02:34:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.994 [2024-07-25 02:34:48.849006] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:10:01.994 [2024-07-25 02:34:48.849259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:02.564 EAL: TSC is not safe to use in SMP mode 00:10:02.564 EAL: TSC is not invariant 00:10:02.564 [2024-07-25 02:34:49.267752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.564 [2024-07-25 02:34:49.359555] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:02.564 [2024-07-25 02:34:49.361197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.564 [2024-07-25 02:34:49.361766] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.564 [2024-07-25 02:34:49.361776] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.134 02:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.134 02:34:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:03.134 02:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.134 02:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:03.134 BaseBdev1_malloc 00:10:03.134 02:34:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:03.393 true 00:10:03.393 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:03.393 [2024-07-25 02:34:50.280629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:03.393 [2024-07-25 02:34:50.280669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.393 [2024-07-25 02:34:50.280705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1473d3634780 00:10:03.393 [2024-07-25 02:34:50.280711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.393 [2024-07-25 02:34:50.281222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.393 [2024-07-25 02:34:50.281247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:03.393 BaseBdev1 00:10:03.393 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.393 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:03.653 BaseBdev2_malloc 00:10:03.653 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:03.912 true 00:10:03.912 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:03.912 [2024-07-25 02:34:50.820664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:03.912 [2024-07-25 02:34:50.820717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.912 [2024-07-25 02:34:50.820736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1473d3634c80 00:10:03.912 [2024-07-25 02:34:50.820742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.912 [2024-07-25 02:34:50.821269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.912 [2024-07-25 02:34:50.821297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.172 BaseBdev2 00:10:04.172 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:04.172 02:34:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.172 BaseBdev3_malloc 00:10:04.172 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:04.431 true 00:10:04.431 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.691 [2024-07-25 02:34:51.364708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.691 [2024-07-25 02:34:51.364760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.691 [2024-07-25 02:34:51.364779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1473d3635180 00:10:04.691 [2024-07-25 02:34:51.364785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.691 [2024-07-25 02:34:51.365272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.691 [2024-07-25 02:34:51.365296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:04.691 BaseBdev3 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:04.691 [2024-07-25 02:34:51.548729] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.691 [2024-07-25 02:34:51.549197] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.691 [2024-07-25 02:34:51.549219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.691 [2024-07-25 02:34:51.549266] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1473d3635400 00:10:04.691 [2024-07-25 02:34:51.549276] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:04.691 [2024-07-25 02:34:51.549305] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1473d36a0e20 00:10:04.691 [2024-07-25 02:34:51.549366] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1473d3635400 00:10:04.691 [2024-07-25 02:34:51.549373] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1473d3635400 00:10:04.691 [2024-07-25 02:34:51.549390] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.691 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.951 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.951 "name": "raid_bdev1", 00:10:04.951 "uuid": "7816d214-4a2e-11ef-9c8e-7947904e2597", 00:10:04.951 "strip_size_kb": 64, 00:10:04.951 "state": "online", 00:10:04.951 "raid_level": "concat", 00:10:04.951 "superblock": true, 00:10:04.951 "num_base_bdevs": 3, 00:10:04.951 "num_base_bdevs_discovered": 3, 00:10:04.951 "num_base_bdevs_operational": 3, 00:10:04.951 "base_bdevs_list": [ 00:10:04.951 { 00:10:04.951 "name": "BaseBdev1", 00:10:04.951 "uuid": "f11e4466-8de9-7154-ab24-f838a605371c", 00:10:04.951 "is_configured": true, 00:10:04.951 "data_offset": 2048, 00:10:04.951 "data_size": 63488 00:10:04.951 }, 00:10:04.951 { 00:10:04.951 "name": "BaseBdev2", 00:10:04.951 "uuid": "ba3aaccb-b9e1-1b5a-af1d-d6fbf6c3716e", 00:10:04.951 "is_configured": true, 00:10:04.951 "data_offset": 2048, 00:10:04.951 "data_size": 63488 00:10:04.951 }, 00:10:04.951 { 00:10:04.951 "name": "BaseBdev3", 00:10:04.951 "uuid": "5a0014a4-e6df-b251-8abd-596472d8d437", 00:10:04.951 "is_configured": true, 00:10:04.951 "data_offset": 2048, 00:10:04.951 "data_size": 63488 00:10:04.951 } 00:10:04.951 ] 00:10:04.951 }' 00:10:04.951 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.951 02:34:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.211 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:05.211 02:34:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:10:05.211 [2024-07-25 02:34:52.084824] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1473d36a0ec0 00:10:06.179 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.439 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.699 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.699 "name": "raid_bdev1", 00:10:06.699 "uuid": "7816d214-4a2e-11ef-9c8e-7947904e2597", 00:10:06.699 "strip_size_kb": 64, 00:10:06.699 "state": "online", 00:10:06.699 "raid_level": "concat", 00:10:06.699 "superblock": true, 00:10:06.699 "num_base_bdevs": 3, 00:10:06.699 "num_base_bdevs_discovered": 3, 00:10:06.699 "num_base_bdevs_operational": 3, 00:10:06.699 "base_bdevs_list": [ 00:10:06.699 { 00:10:06.699 "name": "BaseBdev1", 00:10:06.699 "uuid": "f11e4466-8de9-7154-ab24-f838a605371c", 00:10:06.699 "is_configured": true, 00:10:06.699 "data_offset": 2048, 00:10:06.699 "data_size": 63488 00:10:06.699 }, 00:10:06.699 { 00:10:06.699 "name": "BaseBdev2", 00:10:06.699 "uuid": "ba3aaccb-b9e1-1b5a-af1d-d6fbf6c3716e", 00:10:06.699 "is_configured": true, 00:10:06.699 "data_offset": 2048, 00:10:06.699 "data_size": 63488 00:10:06.699 }, 00:10:06.699 { 00:10:06.699 "name": "BaseBdev3", 00:10:06.699 "uuid": "5a0014a4-e6df-b251-8abd-596472d8d437", 00:10:06.699 "is_configured": true, 00:10:06.699 "data_offset": 2048, 00:10:06.699 "data_size": 63488 00:10:06.699 } 00:10:06.699 ] 00:10:06.699 }' 00:10:06.699 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.699 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.958 
02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:07.218 [2024-07-25 02:34:53.885060] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.218 [2024-07-25 02:34:53.885085] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.218 [2024-07-25 02:34:53.885373] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.218 [2024-07-25 02:34:53.885387] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.218 [2024-07-25 02:34:53.885394] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.218 [2024-07-25 02:34:53.885397] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1473d3635400 name raid_bdev1, state offline 00:10:07.218 0 00:10:07.218 02:34:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55745 00:10:07.218 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 55745 ']' 00:10:07.218 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 55745 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55745 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:07.219 killing process with pid 55745 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55745' 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 55745 00:10:07.219 [2024-07-25 02:34:53.913478] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.219 02:34:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 55745 00:10:07.219 [2024-07-25 02:34:53.927162] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.UsmjFEy2vA 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:10:07.219 00:10:07.219 real 0m5.280s 00:10:07.219 user 0m7.934s 00:10:07.219 sys 0m0.888s 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:10:07.219 02:34:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.219 ************************************ 00:10:07.219 END TEST raid_write_error_test 00:10:07.219 ************************************ 00:10:07.479 02:34:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:07.479 02:34:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:07.479 02:34:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:10:07.479 02:34:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:07.479 02:34:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.479 02:34:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.479 ************************************ 00:10:07.479 START TEST raid_state_function_test 00:10:07.479 ************************************ 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=55870 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 55870' 00:10:07.479 Process raid pid: 55870 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 55870 /var/tmp/spdk-raid.sock 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 55870 ']' 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.479 02:34:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.479 [2024-07-25 02:34:54.189207] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:10:07.479 [2024-07-25 02:34:54.189540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:07.739 EAL: TSC is not safe to use in SMP mode 00:10:07.739 EAL: TSC is not invariant 00:10:07.739 [2024-07-25 02:34:54.610963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.999 [2024-07-25 02:34:54.704172] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
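The state-function test that has just started exercises a raid1 array over three base bdevs without a superblock, repeatedly checking Existed_Raid as base bdevs are created and claimed. Each verify step in the trace below reduces to roughly this pattern (the select() filter is taken verbatim from the trace; pulling individual fields out with jq afterwards is an assumed illustration of what verify_raid_bdev_state compares, not its literal implementation):

    RPC="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # expected values come from the caller, e.g. "configuring raid1 0 3" or "online raid1 0 3"
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]]
    [[ $(jq -r '.raid_level' <<< "$info") == "$raid_level" ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == "$num_base_bdevs_operational" ]]

As the surrounding trace shows, Existed_Raid reports "configuring" with 0, 1 and then 2 discovered base bdevs while BaseBdev1 and BaseBdev2 are claimed, and only switches to "online" once BaseBdev3 completes the set of three.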
00:10:07.999 [2024-07-25 02:34:54.705830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.999 [2024-07-25 02:34:54.706461] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.999 [2024-07-25 02:34:54.706471] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.259 02:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.259 02:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:08.259 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:08.519 [2024-07-25 02:34:55.257347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.519 [2024-07-25 02:34:55.257383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.519 [2024-07-25 02:34:55.257386] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.519 [2024-07-25 02:34:55.257392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.519 [2024-07-25 02:34:55.257394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.519 [2024-07-25 02:34:55.257399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.519 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.777 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.777 "name": "Existed_Raid", 00:10:08.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.777 "strip_size_kb": 0, 00:10:08.777 "state": "configuring", 00:10:08.777 "raid_level": "raid1", 00:10:08.777 "superblock": false, 00:10:08.777 "num_base_bdevs": 3, 00:10:08.778 "num_base_bdevs_discovered": 0, 00:10:08.778 "num_base_bdevs_operational": 3, 00:10:08.778 "base_bdevs_list": [ 00:10:08.778 
{ 00:10:08.778 "name": "BaseBdev1", 00:10:08.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.778 "is_configured": false, 00:10:08.778 "data_offset": 0, 00:10:08.778 "data_size": 0 00:10:08.778 }, 00:10:08.778 { 00:10:08.778 "name": "BaseBdev2", 00:10:08.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.778 "is_configured": false, 00:10:08.778 "data_offset": 0, 00:10:08.778 "data_size": 0 00:10:08.778 }, 00:10:08.778 { 00:10:08.778 "name": "BaseBdev3", 00:10:08.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.778 "is_configured": false, 00:10:08.778 "data_offset": 0, 00:10:08.778 "data_size": 0 00:10:08.778 } 00:10:08.778 ] 00:10:08.778 }' 00:10:08.778 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.778 02:34:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.039 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:09.039 [2024-07-25 02:34:55.889373] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.039 [2024-07-25 02:34:55.889388] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b97f434500 name Existed_Raid, state configuring 00:10:09.039 02:34:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:09.336 [2024-07-25 02:34:56.077405] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.336 [2024-07-25 02:34:56.077439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.336 [2024-07-25 02:34:56.077442] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.336 [2024-07-25 02:34:56.077448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.336 [2024-07-25 02:34:56.077451] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.336 [2024-07-25 02:34:56.077457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.336 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.602 [2024-07-25 02:34:56.266203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.602 BaseBdev1 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:10:09.602 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.862 [ 00:10:09.862 { 00:10:09.862 "name": "BaseBdev1", 00:10:09.862 "aliases": [ 00:10:09.862 "7ae68875-4a2e-11ef-9c8e-7947904e2597" 00:10:09.862 ], 00:10:09.862 "product_name": "Malloc disk", 00:10:09.862 "block_size": 512, 00:10:09.862 "num_blocks": 65536, 00:10:09.862 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:09.862 "assigned_rate_limits": { 00:10:09.862 "rw_ios_per_sec": 0, 00:10:09.862 "rw_mbytes_per_sec": 0, 00:10:09.862 "r_mbytes_per_sec": 0, 00:10:09.862 "w_mbytes_per_sec": 0 00:10:09.862 }, 00:10:09.862 "claimed": true, 00:10:09.862 "claim_type": "exclusive_write", 00:10:09.862 "zoned": false, 00:10:09.862 "supported_io_types": { 00:10:09.862 "read": true, 00:10:09.862 "write": true, 00:10:09.862 "unmap": true, 00:10:09.862 "flush": true, 00:10:09.862 "reset": true, 00:10:09.862 "nvme_admin": false, 00:10:09.862 "nvme_io": false, 00:10:09.862 "nvme_io_md": false, 00:10:09.862 "write_zeroes": true, 00:10:09.862 "zcopy": true, 00:10:09.862 "get_zone_info": false, 00:10:09.862 "zone_management": false, 00:10:09.862 "zone_append": false, 00:10:09.862 "compare": false, 00:10:09.862 "compare_and_write": false, 00:10:09.862 "abort": true, 00:10:09.862 "seek_hole": false, 00:10:09.862 "seek_data": false, 00:10:09.862 "copy": true, 00:10:09.862 "nvme_iov_md": false 00:10:09.862 }, 00:10:09.862 "memory_domains": [ 00:10:09.862 { 00:10:09.862 "dma_device_id": "system", 00:10:09.862 "dma_device_type": 1 00:10:09.862 }, 00:10:09.862 { 00:10:09.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.862 "dma_device_type": 2 00:10:09.862 } 00:10:09.862 ], 00:10:09.862 "driver_specific": {} 00:10:09.862 } 00:10:09.862 ] 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.862 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.122 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:10:10.122 "name": "Existed_Raid", 00:10:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.122 "strip_size_kb": 0, 00:10:10.122 "state": "configuring", 00:10:10.122 "raid_level": "raid1", 00:10:10.122 "superblock": false, 00:10:10.122 "num_base_bdevs": 3, 00:10:10.122 "num_base_bdevs_discovered": 1, 00:10:10.122 "num_base_bdevs_operational": 3, 00:10:10.122 "base_bdevs_list": [ 00:10:10.122 { 00:10:10.122 "name": "BaseBdev1", 00:10:10.122 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:10.122 "is_configured": true, 00:10:10.122 "data_offset": 0, 00:10:10.122 "data_size": 65536 00:10:10.122 }, 00:10:10.122 { 00:10:10.122 "name": "BaseBdev2", 00:10:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.122 "is_configured": false, 00:10:10.122 "data_offset": 0, 00:10:10.122 "data_size": 0 00:10:10.122 }, 00:10:10.122 { 00:10:10.122 "name": "BaseBdev3", 00:10:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.122 "is_configured": false, 00:10:10.122 "data_offset": 0, 00:10:10.122 "data_size": 0 00:10:10.122 } 00:10:10.122 ] 00:10:10.122 }' 00:10:10.122 02:34:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.122 02:34:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.382 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:10.382 [2024-07-25 02:34:57.241495] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.382 [2024-07-25 02:34:57.241513] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b97f434500 name Existed_Raid, state configuring 00:10:10.382 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:10.641 [2024-07-25 02:34:57.421517] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.641 [2024-07-25 02:34:57.422198] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.641 [2024-07-25 02:34:57.422229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.641 [2024-07-25 02:34:57.422232] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.641 [2024-07-25 02:34:57.422238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.641 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.900 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.900 "name": "Existed_Raid", 00:10:10.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.900 "strip_size_kb": 0, 00:10:10.900 "state": "configuring", 00:10:10.900 "raid_level": "raid1", 00:10:10.900 "superblock": false, 00:10:10.900 "num_base_bdevs": 3, 00:10:10.900 "num_base_bdevs_discovered": 1, 00:10:10.900 "num_base_bdevs_operational": 3, 00:10:10.900 "base_bdevs_list": [ 00:10:10.900 { 00:10:10.900 "name": "BaseBdev1", 00:10:10.900 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:10.900 "is_configured": true, 00:10:10.900 "data_offset": 0, 00:10:10.900 "data_size": 65536 00:10:10.900 }, 00:10:10.900 { 00:10:10.900 "name": "BaseBdev2", 00:10:10.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.900 "is_configured": false, 00:10:10.900 "data_offset": 0, 00:10:10.900 "data_size": 0 00:10:10.900 }, 00:10:10.900 { 00:10:10.900 "name": "BaseBdev3", 00:10:10.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.900 "is_configured": false, 00:10:10.900 "data_offset": 0, 00:10:10.900 "data_size": 0 00:10:10.900 } 00:10:10.900 ] 00:10:10.900 }' 00:10:10.900 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.900 02:34:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.159 02:34:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.159 [2024-07-25 02:34:58.029673] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.159 BaseBdev2 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:11.159 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:11.418 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.677 [ 00:10:11.677 { 00:10:11.677 "name": "BaseBdev2", 00:10:11.677 "aliases": [ 00:10:11.677 "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597" 00:10:11.677 ], 00:10:11.677 "product_name": "Malloc disk", 00:10:11.677 "block_size": 512, 00:10:11.677 "num_blocks": 65536, 00:10:11.677 "uuid": "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597", 00:10:11.677 "assigned_rate_limits": { 00:10:11.677 "rw_ios_per_sec": 0, 00:10:11.677 "rw_mbytes_per_sec": 0, 00:10:11.677 "r_mbytes_per_sec": 0, 00:10:11.677 "w_mbytes_per_sec": 0 00:10:11.677 }, 00:10:11.677 "claimed": true, 00:10:11.677 "claim_type": "exclusive_write", 00:10:11.677 "zoned": false, 00:10:11.677 "supported_io_types": { 00:10:11.677 "read": true, 00:10:11.677 "write": true, 00:10:11.677 "unmap": true, 00:10:11.677 "flush": true, 00:10:11.677 "reset": true, 00:10:11.677 "nvme_admin": false, 00:10:11.677 "nvme_io": false, 00:10:11.677 "nvme_io_md": false, 00:10:11.677 "write_zeroes": true, 00:10:11.677 "zcopy": true, 00:10:11.677 "get_zone_info": false, 00:10:11.677 "zone_management": false, 00:10:11.677 "zone_append": false, 00:10:11.677 "compare": false, 00:10:11.677 "compare_and_write": false, 00:10:11.677 "abort": true, 00:10:11.677 "seek_hole": false, 00:10:11.677 "seek_data": false, 00:10:11.677 "copy": true, 00:10:11.677 "nvme_iov_md": false 00:10:11.677 }, 00:10:11.677 "memory_domains": [ 00:10:11.677 { 00:10:11.677 "dma_device_id": "system", 00:10:11.677 "dma_device_type": 1 00:10:11.677 }, 00:10:11.677 { 00:10:11.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.677 "dma_device_type": 2 00:10:11.677 } 00:10:11.677 ], 00:10:11.677 "driver_specific": {} 00:10:11.677 } 00:10:11.677 ] 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.677 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.677 02:34:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.677 "name": "Existed_Raid", 00:10:11.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.677 "strip_size_kb": 0, 00:10:11.677 "state": "configuring", 00:10:11.678 "raid_level": "raid1", 00:10:11.678 "superblock": false, 00:10:11.678 "num_base_bdevs": 3, 00:10:11.678 "num_base_bdevs_discovered": 2, 00:10:11.678 "num_base_bdevs_operational": 3, 00:10:11.678 "base_bdevs_list": [ 00:10:11.678 { 00:10:11.678 "name": "BaseBdev1", 00:10:11.678 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:11.678 "is_configured": true, 00:10:11.678 "data_offset": 0, 00:10:11.678 "data_size": 65536 00:10:11.678 }, 00:10:11.678 { 00:10:11.678 "name": "BaseBdev2", 00:10:11.678 "uuid": "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597", 00:10:11.678 "is_configured": true, 00:10:11.678 "data_offset": 0, 00:10:11.678 "data_size": 65536 00:10:11.678 }, 00:10:11.678 { 00:10:11.678 "name": "BaseBdev3", 00:10:11.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.678 "is_configured": false, 00:10:11.678 "data_offset": 0, 00:10:11.678 "data_size": 0 00:10:11.678 } 00:10:11.678 ] 00:10:11.678 }' 00:10:11.678 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.678 02:34:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.246 02:34:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.246 [2024-07-25 02:34:59.017711] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.246 [2024-07-25 02:34:59.017728] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b97f434a00 00:10:12.246 [2024-07-25 02:34:59.017731] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:12.246 [2024-07-25 02:34:59.017747] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b97f497e20 00:10:12.246 [2024-07-25 02:34:59.017836] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b97f434a00 00:10:12.246 [2024-07-25 02:34:59.017839] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10b97f434a00 00:10:12.246 [2024-07-25 02:34:59.017862] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.246 BaseBdev3 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:12.246 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:12.505 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:10:12.505 [ 00:10:12.505 { 00:10:12.505 "name": "BaseBdev3", 00:10:12.505 "aliases": [ 00:10:12.505 "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597" 00:10:12.505 ], 00:10:12.505 "product_name": "Malloc disk", 00:10:12.505 "block_size": 512, 00:10:12.505 "num_blocks": 65536, 00:10:12.505 "uuid": "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597", 00:10:12.505 "assigned_rate_limits": { 00:10:12.505 "rw_ios_per_sec": 0, 00:10:12.505 "rw_mbytes_per_sec": 0, 00:10:12.505 "r_mbytes_per_sec": 0, 00:10:12.505 "w_mbytes_per_sec": 0 00:10:12.505 }, 00:10:12.505 "claimed": true, 00:10:12.505 "claim_type": "exclusive_write", 00:10:12.505 "zoned": false, 00:10:12.505 "supported_io_types": { 00:10:12.505 "read": true, 00:10:12.505 "write": true, 00:10:12.505 "unmap": true, 00:10:12.505 "flush": true, 00:10:12.505 "reset": true, 00:10:12.505 "nvme_admin": false, 00:10:12.505 "nvme_io": false, 00:10:12.505 "nvme_io_md": false, 00:10:12.505 "write_zeroes": true, 00:10:12.505 "zcopy": true, 00:10:12.505 "get_zone_info": false, 00:10:12.505 "zone_management": false, 00:10:12.505 "zone_append": false, 00:10:12.505 "compare": false, 00:10:12.505 "compare_and_write": false, 00:10:12.505 "abort": true, 00:10:12.505 "seek_hole": false, 00:10:12.505 "seek_data": false, 00:10:12.505 "copy": true, 00:10:12.505 "nvme_iov_md": false 00:10:12.505 }, 00:10:12.505 "memory_domains": [ 00:10:12.505 { 00:10:12.505 "dma_device_id": "system", 00:10:12.505 "dma_device_type": 1 00:10:12.505 }, 00:10:12.505 { 00:10:12.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.506 "dma_device_type": 2 00:10:12.506 } 00:10:12.506 ], 00:10:12.506 "driver_specific": {} 00:10:12.506 } 00:10:12.506 ] 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.506 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.765 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:10:12.765 "name": "Existed_Raid", 00:10:12.765 "uuid": "7c8a8049-4a2e-11ef-9c8e-7947904e2597", 00:10:12.765 "strip_size_kb": 0, 00:10:12.765 "state": "online", 00:10:12.765 "raid_level": "raid1", 00:10:12.765 "superblock": false, 00:10:12.765 "num_base_bdevs": 3, 00:10:12.765 "num_base_bdevs_discovered": 3, 00:10:12.765 "num_base_bdevs_operational": 3, 00:10:12.765 "base_bdevs_list": [ 00:10:12.765 { 00:10:12.765 "name": "BaseBdev1", 00:10:12.765 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:12.765 "is_configured": true, 00:10:12.765 "data_offset": 0, 00:10:12.765 "data_size": 65536 00:10:12.765 }, 00:10:12.765 { 00:10:12.765 "name": "BaseBdev2", 00:10:12.765 "uuid": "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597", 00:10:12.765 "is_configured": true, 00:10:12.765 "data_offset": 0, 00:10:12.765 "data_size": 65536 00:10:12.765 }, 00:10:12.765 { 00:10:12.765 "name": "BaseBdev3", 00:10:12.765 "uuid": "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597", 00:10:12.765 "is_configured": true, 00:10:12.765 "data_offset": 0, 00:10:12.765 "data_size": 65536 00:10:12.765 } 00:10:12.765 ] 00:10:12.765 }' 00:10:12.765 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.765 02:34:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:13.024 02:34:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:13.283 [2024-07-25 02:35:00.001737] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.283 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:13.283 "name": "Existed_Raid", 00:10:13.283 "aliases": [ 00:10:13.283 "7c8a8049-4a2e-11ef-9c8e-7947904e2597" 00:10:13.283 ], 00:10:13.283 "product_name": "Raid Volume", 00:10:13.283 "block_size": 512, 00:10:13.283 "num_blocks": 65536, 00:10:13.283 "uuid": "7c8a8049-4a2e-11ef-9c8e-7947904e2597", 00:10:13.283 "assigned_rate_limits": { 00:10:13.283 "rw_ios_per_sec": 0, 00:10:13.283 "rw_mbytes_per_sec": 0, 00:10:13.283 "r_mbytes_per_sec": 0, 00:10:13.283 "w_mbytes_per_sec": 0 00:10:13.283 }, 00:10:13.283 "claimed": false, 00:10:13.283 "zoned": false, 00:10:13.283 "supported_io_types": { 00:10:13.283 "read": true, 00:10:13.283 "write": true, 00:10:13.283 "unmap": false, 00:10:13.283 "flush": false, 00:10:13.283 "reset": true, 00:10:13.283 "nvme_admin": false, 00:10:13.283 "nvme_io": false, 00:10:13.283 "nvme_io_md": false, 00:10:13.283 "write_zeroes": true, 00:10:13.283 "zcopy": false, 00:10:13.283 "get_zone_info": false, 00:10:13.283 "zone_management": false, 00:10:13.283 "zone_append": false, 00:10:13.283 "compare": false, 
00:10:13.283 "compare_and_write": false, 00:10:13.283 "abort": false, 00:10:13.283 "seek_hole": false, 00:10:13.283 "seek_data": false, 00:10:13.283 "copy": false, 00:10:13.283 "nvme_iov_md": false 00:10:13.283 }, 00:10:13.283 "memory_domains": [ 00:10:13.283 { 00:10:13.283 "dma_device_id": "system", 00:10:13.283 "dma_device_type": 1 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.283 "dma_device_type": 2 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "dma_device_id": "system", 00:10:13.283 "dma_device_type": 1 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.283 "dma_device_type": 2 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "dma_device_id": "system", 00:10:13.283 "dma_device_type": 1 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.283 "dma_device_type": 2 00:10:13.283 } 00:10:13.283 ], 00:10:13.283 "driver_specific": { 00:10:13.283 "raid": { 00:10:13.283 "uuid": "7c8a8049-4a2e-11ef-9c8e-7947904e2597", 00:10:13.283 "strip_size_kb": 0, 00:10:13.283 "state": "online", 00:10:13.283 "raid_level": "raid1", 00:10:13.283 "superblock": false, 00:10:13.283 "num_base_bdevs": 3, 00:10:13.283 "num_base_bdevs_discovered": 3, 00:10:13.283 "num_base_bdevs_operational": 3, 00:10:13.283 "base_bdevs_list": [ 00:10:13.283 { 00:10:13.283 "name": "BaseBdev1", 00:10:13.283 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:13.283 "is_configured": true, 00:10:13.283 "data_offset": 0, 00:10:13.283 "data_size": 65536 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "name": "BaseBdev2", 00:10:13.283 "uuid": "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597", 00:10:13.283 "is_configured": true, 00:10:13.283 "data_offset": 0, 00:10:13.283 "data_size": 65536 00:10:13.283 }, 00:10:13.283 { 00:10:13.283 "name": "BaseBdev3", 00:10:13.283 "uuid": "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597", 00:10:13.283 "is_configured": true, 00:10:13.283 "data_offset": 0, 00:10:13.283 "data_size": 65536 00:10:13.283 } 00:10:13.283 ] 00:10:13.283 } 00:10:13.283 } 00:10:13.283 }' 00:10:13.283 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.283 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:13.283 BaseBdev2 00:10:13.283 BaseBdev3' 00:10:13.283 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.283 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:13.283 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.542 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.542 "name": "BaseBdev1", 00:10:13.542 "aliases": [ 00:10:13.542 "7ae68875-4a2e-11ef-9c8e-7947904e2597" 00:10:13.542 ], 00:10:13.542 "product_name": "Malloc disk", 00:10:13.542 "block_size": 512, 00:10:13.542 "num_blocks": 65536, 00:10:13.542 "uuid": "7ae68875-4a2e-11ef-9c8e-7947904e2597", 00:10:13.542 "assigned_rate_limits": { 00:10:13.542 "rw_ios_per_sec": 0, 00:10:13.542 "rw_mbytes_per_sec": 0, 00:10:13.542 "r_mbytes_per_sec": 0, 00:10:13.542 "w_mbytes_per_sec": 0 00:10:13.542 }, 00:10:13.542 "claimed": true, 00:10:13.542 "claim_type": "exclusive_write", 00:10:13.542 "zoned": false, 00:10:13.542 
"supported_io_types": { 00:10:13.542 "read": true, 00:10:13.542 "write": true, 00:10:13.542 "unmap": true, 00:10:13.542 "flush": true, 00:10:13.542 "reset": true, 00:10:13.542 "nvme_admin": false, 00:10:13.542 "nvme_io": false, 00:10:13.542 "nvme_io_md": false, 00:10:13.542 "write_zeroes": true, 00:10:13.542 "zcopy": true, 00:10:13.542 "get_zone_info": false, 00:10:13.542 "zone_management": false, 00:10:13.542 "zone_append": false, 00:10:13.542 "compare": false, 00:10:13.542 "compare_and_write": false, 00:10:13.542 "abort": true, 00:10:13.542 "seek_hole": false, 00:10:13.542 "seek_data": false, 00:10:13.542 "copy": true, 00:10:13.542 "nvme_iov_md": false 00:10:13.542 }, 00:10:13.542 "memory_domains": [ 00:10:13.542 { 00:10:13.543 "dma_device_id": "system", 00:10:13.543 "dma_device_type": 1 00:10:13.543 }, 00:10:13.543 { 00:10:13.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.543 "dma_device_type": 2 00:10:13.543 } 00:10:13.543 ], 00:10:13.543 "driver_specific": {} 00:10:13.543 }' 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:13.543 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.803 "name": "BaseBdev2", 00:10:13.803 "aliases": [ 00:10:13.803 "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597" 00:10:13.803 ], 00:10:13.803 "product_name": "Malloc disk", 00:10:13.803 "block_size": 512, 00:10:13.803 "num_blocks": 65536, 00:10:13.803 "uuid": "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597", 00:10:13.803 "assigned_rate_limits": { 00:10:13.803 "rw_ios_per_sec": 0, 00:10:13.803 "rw_mbytes_per_sec": 0, 00:10:13.803 "r_mbytes_per_sec": 0, 00:10:13.803 "w_mbytes_per_sec": 0 00:10:13.803 }, 00:10:13.803 "claimed": true, 00:10:13.803 "claim_type": "exclusive_write", 00:10:13.803 "zoned": false, 00:10:13.803 "supported_io_types": { 00:10:13.803 "read": true, 00:10:13.803 "write": true, 00:10:13.803 "unmap": true, 00:10:13.803 "flush": true, 00:10:13.803 "reset": true, 00:10:13.803 "nvme_admin": false, 
00:10:13.803 "nvme_io": false, 00:10:13.803 "nvme_io_md": false, 00:10:13.803 "write_zeroes": true, 00:10:13.803 "zcopy": true, 00:10:13.803 "get_zone_info": false, 00:10:13.803 "zone_management": false, 00:10:13.803 "zone_append": false, 00:10:13.803 "compare": false, 00:10:13.803 "compare_and_write": false, 00:10:13.803 "abort": true, 00:10:13.803 "seek_hole": false, 00:10:13.803 "seek_data": false, 00:10:13.803 "copy": true, 00:10:13.803 "nvme_iov_md": false 00:10:13.803 }, 00:10:13.803 "memory_domains": [ 00:10:13.803 { 00:10:13.803 "dma_device_id": "system", 00:10:13.803 "dma_device_type": 1 00:10:13.803 }, 00:10:13.803 { 00:10:13.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.803 "dma_device_type": 2 00:10:13.803 } 00:10:13.803 ], 00:10:13.803 "driver_specific": {} 00:10:13.803 }' 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:13.803 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:14.063 "name": "BaseBdev3", 00:10:14.063 "aliases": [ 00:10:14.063 "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597" 00:10:14.063 ], 00:10:14.063 "product_name": "Malloc disk", 00:10:14.063 "block_size": 512, 00:10:14.063 "num_blocks": 65536, 00:10:14.063 "uuid": "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597", 00:10:14.063 "assigned_rate_limits": { 00:10:14.063 "rw_ios_per_sec": 0, 00:10:14.063 "rw_mbytes_per_sec": 0, 00:10:14.063 "r_mbytes_per_sec": 0, 00:10:14.063 "w_mbytes_per_sec": 0 00:10:14.063 }, 00:10:14.063 "claimed": true, 00:10:14.063 "claim_type": "exclusive_write", 00:10:14.063 "zoned": false, 00:10:14.063 "supported_io_types": { 00:10:14.063 "read": true, 00:10:14.063 "write": true, 00:10:14.063 "unmap": true, 00:10:14.063 "flush": true, 00:10:14.063 "reset": true, 00:10:14.063 "nvme_admin": false, 00:10:14.063 "nvme_io": false, 00:10:14.063 "nvme_io_md": false, 00:10:14.063 "write_zeroes": true, 00:10:14.063 "zcopy": true, 00:10:14.063 "get_zone_info": false, 00:10:14.063 "zone_management": 
false, 00:10:14.063 "zone_append": false, 00:10:14.063 "compare": false, 00:10:14.063 "compare_and_write": false, 00:10:14.063 "abort": true, 00:10:14.063 "seek_hole": false, 00:10:14.063 "seek_data": false, 00:10:14.063 "copy": true, 00:10:14.063 "nvme_iov_md": false 00:10:14.063 }, 00:10:14.063 "memory_domains": [ 00:10:14.063 { 00:10:14.063 "dma_device_id": "system", 00:10:14.063 "dma_device_type": 1 00:10:14.063 }, 00:10:14.063 { 00:10:14.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.063 "dma_device_type": 2 00:10:14.063 } 00:10:14.063 ], 00:10:14.063 "driver_specific": {} 00:10:14.063 }' 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:14.063 02:35:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:14.323 [2024-07-25 02:35:00.993800] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.323 "name": "Existed_Raid", 00:10:14.323 "uuid": "7c8a8049-4a2e-11ef-9c8e-7947904e2597", 00:10:14.323 "strip_size_kb": 0, 00:10:14.323 "state": "online", 00:10:14.323 "raid_level": "raid1", 00:10:14.323 "superblock": false, 00:10:14.323 "num_base_bdevs": 3, 00:10:14.323 "num_base_bdevs_discovered": 2, 00:10:14.323 "num_base_bdevs_operational": 2, 00:10:14.323 "base_bdevs_list": [ 00:10:14.323 { 00:10:14.323 "name": null, 00:10:14.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.323 "is_configured": false, 00:10:14.323 "data_offset": 0, 00:10:14.323 "data_size": 65536 00:10:14.323 }, 00:10:14.323 { 00:10:14.323 "name": "BaseBdev2", 00:10:14.323 "uuid": "7bf3b7fd-4a2e-11ef-9c8e-7947904e2597", 00:10:14.323 "is_configured": true, 00:10:14.323 "data_offset": 0, 00:10:14.323 "data_size": 65536 00:10:14.323 }, 00:10:14.323 { 00:10:14.323 "name": "BaseBdev3", 00:10:14.323 "uuid": "7c8a7c8e-4a2e-11ef-9c8e-7947904e2597", 00:10:14.323 "is_configured": true, 00:10:14.323 "data_offset": 0, 00:10:14.323 "data_size": 65536 00:10:14.323 } 00:10:14.323 ] 00:10:14.323 }' 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.323 02:35:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.583 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:14.583 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:14.583 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.583 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:14.843 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:14.843 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:14.843 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:15.102 [2024-07-25 02:35:01.810516] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.102 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:15.102 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:15.102 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.102 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:15.102 02:35:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:15.102 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.102 02:35:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:15.362 [2024-07-25 02:35:02.155204] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.362 [2024-07-25 02:35:02.155226] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.362 [2024-07-25 02:35:02.159963] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.362 [2024-07-25 02:35:02.159978] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.362 [2024-07-25 02:35:02.159981] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b97f434a00 name Existed_Raid, state offline 00:10:15.362 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:15.362 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:15.362 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.362 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.621 BaseBdev2 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:15.621 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:15.622 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:15.622 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:15.882 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.141 [ 00:10:16.141 { 00:10:16.141 "name": "BaseBdev2", 00:10:16.141 "aliases": [ 00:10:16.141 "7ea049c5-4a2e-11ef-9c8e-7947904e2597" 00:10:16.141 ], 00:10:16.141 
"product_name": "Malloc disk", 00:10:16.141 "block_size": 512, 00:10:16.141 "num_blocks": 65536, 00:10:16.141 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:16.141 "assigned_rate_limits": { 00:10:16.141 "rw_ios_per_sec": 0, 00:10:16.141 "rw_mbytes_per_sec": 0, 00:10:16.141 "r_mbytes_per_sec": 0, 00:10:16.141 "w_mbytes_per_sec": 0 00:10:16.141 }, 00:10:16.141 "claimed": false, 00:10:16.141 "zoned": false, 00:10:16.141 "supported_io_types": { 00:10:16.141 "read": true, 00:10:16.141 "write": true, 00:10:16.141 "unmap": true, 00:10:16.141 "flush": true, 00:10:16.141 "reset": true, 00:10:16.141 "nvme_admin": false, 00:10:16.141 "nvme_io": false, 00:10:16.141 "nvme_io_md": false, 00:10:16.141 "write_zeroes": true, 00:10:16.141 "zcopy": true, 00:10:16.141 "get_zone_info": false, 00:10:16.141 "zone_management": false, 00:10:16.141 "zone_append": false, 00:10:16.141 "compare": false, 00:10:16.141 "compare_and_write": false, 00:10:16.141 "abort": true, 00:10:16.141 "seek_hole": false, 00:10:16.142 "seek_data": false, 00:10:16.142 "copy": true, 00:10:16.142 "nvme_iov_md": false 00:10:16.142 }, 00:10:16.142 "memory_domains": [ 00:10:16.142 { 00:10:16.142 "dma_device_id": "system", 00:10:16.142 "dma_device_type": 1 00:10:16.142 }, 00:10:16.142 { 00:10:16.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.142 "dma_device_type": 2 00:10:16.142 } 00:10:16.142 ], 00:10:16.142 "driver_specific": {} 00:10:16.142 } 00:10:16.142 ] 00:10:16.142 02:35:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:16.142 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:16.142 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:16.142 02:35:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.401 BaseBdev3 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:16.401 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.661 [ 00:10:16.661 { 00:10:16.661 "name": "BaseBdev3", 00:10:16.661 "aliases": [ 00:10:16.661 "7ef3e975-4a2e-11ef-9c8e-7947904e2597" 00:10:16.661 ], 00:10:16.661 "product_name": "Malloc disk", 00:10:16.661 "block_size": 512, 00:10:16.661 "num_blocks": 65536, 00:10:16.661 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:16.661 "assigned_rate_limits": { 00:10:16.661 "rw_ios_per_sec": 0, 00:10:16.661 "rw_mbytes_per_sec": 0, 00:10:16.661 "r_mbytes_per_sec": 0, 00:10:16.661 "w_mbytes_per_sec": 0 
00:10:16.661 }, 00:10:16.661 "claimed": false, 00:10:16.661 "zoned": false, 00:10:16.661 "supported_io_types": { 00:10:16.661 "read": true, 00:10:16.661 "write": true, 00:10:16.661 "unmap": true, 00:10:16.661 "flush": true, 00:10:16.661 "reset": true, 00:10:16.661 "nvme_admin": false, 00:10:16.661 "nvme_io": false, 00:10:16.661 "nvme_io_md": false, 00:10:16.661 "write_zeroes": true, 00:10:16.661 "zcopy": true, 00:10:16.661 "get_zone_info": false, 00:10:16.661 "zone_management": false, 00:10:16.661 "zone_append": false, 00:10:16.661 "compare": false, 00:10:16.661 "compare_and_write": false, 00:10:16.661 "abort": true, 00:10:16.661 "seek_hole": false, 00:10:16.661 "seek_data": false, 00:10:16.661 "copy": true, 00:10:16.661 "nvme_iov_md": false 00:10:16.661 }, 00:10:16.661 "memory_domains": [ 00:10:16.661 { 00:10:16.661 "dma_device_id": "system", 00:10:16.661 "dma_device_type": 1 00:10:16.661 }, 00:10:16.661 { 00:10:16.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.661 "dma_device_type": 2 00:10:16.661 } 00:10:16.661 ], 00:10:16.661 "driver_specific": {} 00:10:16.661 } 00:10:16.661 ] 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:16.661 [2024-07-25 02:35:03.556038] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.661 [2024-07-25 02:35:03.556074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.661 [2024-07-25 02:35:03.556080] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.661 [2024-07-25 02:35:03.556548] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.661 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.921 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.921 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.921 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.921 "name": "Existed_Raid", 00:10:16.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.921 "strip_size_kb": 0, 00:10:16.921 "state": "configuring", 00:10:16.921 "raid_level": "raid1", 00:10:16.921 "superblock": false, 00:10:16.921 "num_base_bdevs": 3, 00:10:16.921 "num_base_bdevs_discovered": 2, 00:10:16.921 "num_base_bdevs_operational": 3, 00:10:16.921 "base_bdevs_list": [ 00:10:16.921 { 00:10:16.921 "name": "BaseBdev1", 00:10:16.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.921 "is_configured": false, 00:10:16.921 "data_offset": 0, 00:10:16.921 "data_size": 0 00:10:16.921 }, 00:10:16.921 { 00:10:16.921 "name": "BaseBdev2", 00:10:16.921 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:16.921 "is_configured": true, 00:10:16.921 "data_offset": 0, 00:10:16.921 "data_size": 65536 00:10:16.921 }, 00:10:16.921 { 00:10:16.921 "name": "BaseBdev3", 00:10:16.921 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:16.921 "is_configured": true, 00:10:16.921 "data_offset": 0, 00:10:16.921 "data_size": 65536 00:10:16.921 } 00:10:16.921 ] 00:10:16.921 }' 00:10:16.921 02:35:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.921 02:35:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.179 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:17.438 [2024-07-25 02:35:04.196083] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.438 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.696 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.696 "name": "Existed_Raid", 00:10:17.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.696 "strip_size_kb": 0, 00:10:17.696 "state": "configuring", 
00:10:17.696 "raid_level": "raid1", 00:10:17.696 "superblock": false, 00:10:17.696 "num_base_bdevs": 3, 00:10:17.696 "num_base_bdevs_discovered": 1, 00:10:17.696 "num_base_bdevs_operational": 3, 00:10:17.696 "base_bdevs_list": [ 00:10:17.696 { 00:10:17.696 "name": "BaseBdev1", 00:10:17.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.696 "is_configured": false, 00:10:17.696 "data_offset": 0, 00:10:17.696 "data_size": 0 00:10:17.696 }, 00:10:17.696 { 00:10:17.696 "name": null, 00:10:17.696 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:17.696 "is_configured": false, 00:10:17.696 "data_offset": 0, 00:10:17.696 "data_size": 65536 00:10:17.696 }, 00:10:17.696 { 00:10:17.696 "name": "BaseBdev3", 00:10:17.696 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:17.696 "is_configured": true, 00:10:17.696 "data_offset": 0, 00:10:17.696 "data_size": 65536 00:10:17.696 } 00:10:17.696 ] 00:10:17.696 }' 00:10:17.696 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.696 02:35:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.955 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.955 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:17.955 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:17.955 02:35:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.222 [2024-07-25 02:35:05.016254] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.222 BaseBdev1 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:18.222 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:18.491 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.491 [ 00:10:18.491 { 00:10:18.491 "name": "BaseBdev1", 00:10:18.491 "aliases": [ 00:10:18.491 "801dc9dd-4a2e-11ef-9c8e-7947904e2597" 00:10:18.491 ], 00:10:18.491 "product_name": "Malloc disk", 00:10:18.491 "block_size": 512, 00:10:18.491 "num_blocks": 65536, 00:10:18.491 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:18.491 "assigned_rate_limits": { 00:10:18.491 "rw_ios_per_sec": 0, 00:10:18.491 "rw_mbytes_per_sec": 0, 00:10:18.491 "r_mbytes_per_sec": 0, 00:10:18.491 "w_mbytes_per_sec": 0 00:10:18.491 }, 00:10:18.491 "claimed": true, 00:10:18.491 "claim_type": 
"exclusive_write", 00:10:18.491 "zoned": false, 00:10:18.491 "supported_io_types": { 00:10:18.491 "read": true, 00:10:18.491 "write": true, 00:10:18.491 "unmap": true, 00:10:18.491 "flush": true, 00:10:18.491 "reset": true, 00:10:18.491 "nvme_admin": false, 00:10:18.491 "nvme_io": false, 00:10:18.491 "nvme_io_md": false, 00:10:18.491 "write_zeroes": true, 00:10:18.491 "zcopy": true, 00:10:18.491 "get_zone_info": false, 00:10:18.491 "zone_management": false, 00:10:18.491 "zone_append": false, 00:10:18.491 "compare": false, 00:10:18.491 "compare_and_write": false, 00:10:18.491 "abort": true, 00:10:18.491 "seek_hole": false, 00:10:18.491 "seek_data": false, 00:10:18.491 "copy": true, 00:10:18.491 "nvme_iov_md": false 00:10:18.491 }, 00:10:18.491 "memory_domains": [ 00:10:18.491 { 00:10:18.491 "dma_device_id": "system", 00:10:18.491 "dma_device_type": 1 00:10:18.491 }, 00:10:18.491 { 00:10:18.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.491 "dma_device_type": 2 00:10:18.491 } 00:10:18.492 ], 00:10:18.492 "driver_specific": {} 00:10:18.492 } 00:10:18.492 ] 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.492 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.751 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.751 "name": "Existed_Raid", 00:10:18.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.751 "strip_size_kb": 0, 00:10:18.751 "state": "configuring", 00:10:18.751 "raid_level": "raid1", 00:10:18.751 "superblock": false, 00:10:18.751 "num_base_bdevs": 3, 00:10:18.751 "num_base_bdevs_discovered": 2, 00:10:18.751 "num_base_bdevs_operational": 3, 00:10:18.751 "base_bdevs_list": [ 00:10:18.751 { 00:10:18.751 "name": "BaseBdev1", 00:10:18.751 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:18.751 "is_configured": true, 00:10:18.751 "data_offset": 0, 00:10:18.751 "data_size": 65536 00:10:18.751 }, 00:10:18.751 { 00:10:18.751 "name": null, 00:10:18.751 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:18.751 "is_configured": false, 00:10:18.751 "data_offset": 0, 
00:10:18.751 "data_size": 65536 00:10:18.751 }, 00:10:18.751 { 00:10:18.751 "name": "BaseBdev3", 00:10:18.751 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:18.751 "is_configured": true, 00:10:18.751 "data_offset": 0, 00:10:18.751 "data_size": 65536 00:10:18.751 } 00:10:18.751 ] 00:10:18.751 }' 00:10:18.751 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.751 02:35:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.011 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.011 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:19.270 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:19.270 02:35:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:19.270 [2024-07-25 02:35:06.156259] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.270 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.529 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:19.529 "name": "Existed_Raid", 00:10:19.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.529 "strip_size_kb": 0, 00:10:19.529 "state": "configuring", 00:10:19.529 "raid_level": "raid1", 00:10:19.529 "superblock": false, 00:10:19.529 "num_base_bdevs": 3, 00:10:19.529 "num_base_bdevs_discovered": 1, 00:10:19.529 "num_base_bdevs_operational": 3, 00:10:19.529 "base_bdevs_list": [ 00:10:19.529 { 00:10:19.529 "name": "BaseBdev1", 00:10:19.529 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:19.529 "is_configured": true, 00:10:19.529 "data_offset": 0, 00:10:19.529 "data_size": 65536 00:10:19.529 }, 00:10:19.529 { 00:10:19.529 "name": null, 00:10:19.529 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:19.529 
"is_configured": false, 00:10:19.529 "data_offset": 0, 00:10:19.529 "data_size": 65536 00:10:19.529 }, 00:10:19.529 { 00:10:19.529 "name": null, 00:10:19.529 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:19.529 "is_configured": false, 00:10:19.529 "data_offset": 0, 00:10:19.529 "data_size": 65536 00:10:19.529 } 00:10:19.529 ] 00:10:19.529 }' 00:10:19.529 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:19.529 02:35:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.788 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.788 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.047 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:20.047 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.307 [2024-07-25 02:35:06.980335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.307 02:35:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.307 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.307 "name": "Existed_Raid", 00:10:20.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.307 "strip_size_kb": 0, 00:10:20.307 "state": "configuring", 00:10:20.307 "raid_level": "raid1", 00:10:20.307 "superblock": false, 00:10:20.307 "num_base_bdevs": 3, 00:10:20.307 "num_base_bdevs_discovered": 2, 00:10:20.307 "num_base_bdevs_operational": 3, 00:10:20.307 "base_bdevs_list": [ 00:10:20.307 { 00:10:20.307 "name": "BaseBdev1", 00:10:20.307 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:20.307 "is_configured": true, 00:10:20.307 "data_offset": 0, 00:10:20.307 "data_size": 65536 00:10:20.307 }, 00:10:20.307 { 00:10:20.307 "name": null, 
00:10:20.307 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:20.307 "is_configured": false, 00:10:20.307 "data_offset": 0, 00:10:20.307 "data_size": 65536 00:10:20.307 }, 00:10:20.307 { 00:10:20.307 "name": "BaseBdev3", 00:10:20.307 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:20.307 "is_configured": true, 00:10:20.307 "data_offset": 0, 00:10:20.307 "data_size": 65536 00:10:20.307 } 00:10:20.307 ] 00:10:20.307 }' 00:10:20.307 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.307 02:35:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.566 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.566 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.825 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:20.825 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:21.084 [2024-07-25 02:35:07.808405] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.084 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.085 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.085 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.085 02:35:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.344 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.344 "name": "Existed_Raid", 00:10:21.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.344 "strip_size_kb": 0, 00:10:21.344 "state": "configuring", 00:10:21.344 "raid_level": "raid1", 00:10:21.344 "superblock": false, 00:10:21.344 "num_base_bdevs": 3, 00:10:21.344 "num_base_bdevs_discovered": 1, 00:10:21.344 "num_base_bdevs_operational": 3, 00:10:21.344 "base_bdevs_list": [ 00:10:21.344 { 00:10:21.344 "name": null, 00:10:21.344 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:21.344 "is_configured": false, 00:10:21.344 "data_offset": 0, 00:10:21.344 "data_size": 65536 00:10:21.344 }, 
00:10:21.344 { 00:10:21.344 "name": null, 00:10:21.344 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:21.344 "is_configured": false, 00:10:21.344 "data_offset": 0, 00:10:21.344 "data_size": 65536 00:10:21.344 }, 00:10:21.344 { 00:10:21.344 "name": "BaseBdev3", 00:10:21.344 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:21.344 "is_configured": true, 00:10:21.344 "data_offset": 0, 00:10:21.344 "data_size": 65536 00:10:21.344 } 00:10:21.344 ] 00:10:21.344 }' 00:10:21.344 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.344 02:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.603 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.603 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.603 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:21.603 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:21.862 [2024-07-25 02:35:08.617103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.862 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.121 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.121 "name": "Existed_Raid", 00:10:22.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.121 "strip_size_kb": 0, 00:10:22.121 "state": "configuring", 00:10:22.121 "raid_level": "raid1", 00:10:22.121 "superblock": false, 00:10:22.121 "num_base_bdevs": 3, 00:10:22.121 "num_base_bdevs_discovered": 2, 00:10:22.121 "num_base_bdevs_operational": 3, 00:10:22.121 "base_bdevs_list": [ 00:10:22.121 { 00:10:22.121 "name": null, 00:10:22.121 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:22.121 "is_configured": false, 
00:10:22.121 "data_offset": 0, 00:10:22.121 "data_size": 65536 00:10:22.121 }, 00:10:22.121 { 00:10:22.121 "name": "BaseBdev2", 00:10:22.121 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:22.121 "is_configured": true, 00:10:22.121 "data_offset": 0, 00:10:22.121 "data_size": 65536 00:10:22.121 }, 00:10:22.121 { 00:10:22.121 "name": "BaseBdev3", 00:10:22.121 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:22.121 "is_configured": true, 00:10:22.121 "data_offset": 0, 00:10:22.121 "data_size": 65536 00:10:22.121 } 00:10:22.121 ] 00:10:22.121 }' 00:10:22.121 02:35:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.121 02:35:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.380 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.380 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:22.380 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:22.380 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.380 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:22.639 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 801dc9dd-4a2e-11ef-9c8e-7947904e2597 00:10:22.899 [2024-07-25 02:35:09.577281] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:22.899 [2024-07-25 02:35:09.577297] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x10b97f434f00 00:10:22.899 [2024-07-25 02:35:09.577301] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:22.899 [2024-07-25 02:35:09.577317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b97f497e20 00:10:22.899 [2024-07-25 02:35:09.577379] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10b97f434f00 00:10:22.899 [2024-07-25 02:35:09.577382] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10b97f434f00 00:10:22.899 [2024-07-25 02:35:09.577407] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.899 NewBaseBdev 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:22.899 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:22.899 02:35:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:23.159 [ 00:10:23.159 { 00:10:23.159 "name": "NewBaseBdev", 00:10:23.159 "aliases": [ 00:10:23.159 "801dc9dd-4a2e-11ef-9c8e-7947904e2597" 00:10:23.159 ], 00:10:23.159 "product_name": "Malloc disk", 00:10:23.159 "block_size": 512, 00:10:23.159 "num_blocks": 65536, 00:10:23.159 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:23.159 "assigned_rate_limits": { 00:10:23.159 "rw_ios_per_sec": 0, 00:10:23.159 "rw_mbytes_per_sec": 0, 00:10:23.159 "r_mbytes_per_sec": 0, 00:10:23.159 "w_mbytes_per_sec": 0 00:10:23.159 }, 00:10:23.159 "claimed": true, 00:10:23.159 "claim_type": "exclusive_write", 00:10:23.159 "zoned": false, 00:10:23.159 "supported_io_types": { 00:10:23.159 "read": true, 00:10:23.159 "write": true, 00:10:23.159 "unmap": true, 00:10:23.159 "flush": true, 00:10:23.159 "reset": true, 00:10:23.159 "nvme_admin": false, 00:10:23.159 "nvme_io": false, 00:10:23.159 "nvme_io_md": false, 00:10:23.159 "write_zeroes": true, 00:10:23.159 "zcopy": true, 00:10:23.160 "get_zone_info": false, 00:10:23.160 "zone_management": false, 00:10:23.160 "zone_append": false, 00:10:23.160 "compare": false, 00:10:23.160 "compare_and_write": false, 00:10:23.160 "abort": true, 00:10:23.160 "seek_hole": false, 00:10:23.160 "seek_data": false, 00:10:23.160 "copy": true, 00:10:23.160 "nvme_iov_md": false 00:10:23.160 }, 00:10:23.160 "memory_domains": [ 00:10:23.160 { 00:10:23.160 "dma_device_id": "system", 00:10:23.160 "dma_device_type": 1 00:10:23.160 }, 00:10:23.160 { 00:10:23.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.160 "dma_device_type": 2 00:10:23.160 } 00:10:23.160 ], 00:10:23.160 "driver_specific": {} 00:10:23.160 } 00:10:23.160 ] 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.160 02:35:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.420 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.420 "name": "Existed_Raid", 
00:10:23.420 "uuid": "82d5c37c-4a2e-11ef-9c8e-7947904e2597", 00:10:23.420 "strip_size_kb": 0, 00:10:23.420 "state": "online", 00:10:23.420 "raid_level": "raid1", 00:10:23.420 "superblock": false, 00:10:23.420 "num_base_bdevs": 3, 00:10:23.420 "num_base_bdevs_discovered": 3, 00:10:23.420 "num_base_bdevs_operational": 3, 00:10:23.420 "base_bdevs_list": [ 00:10:23.420 { 00:10:23.420 "name": "NewBaseBdev", 00:10:23.420 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:23.420 "is_configured": true, 00:10:23.420 "data_offset": 0, 00:10:23.420 "data_size": 65536 00:10:23.420 }, 00:10:23.420 { 00:10:23.420 "name": "BaseBdev2", 00:10:23.420 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:23.420 "is_configured": true, 00:10:23.420 "data_offset": 0, 00:10:23.420 "data_size": 65536 00:10:23.420 }, 00:10:23.420 { 00:10:23.420 "name": "BaseBdev3", 00:10:23.420 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:23.420 "is_configured": true, 00:10:23.420 "data_offset": 0, 00:10:23.420 "data_size": 65536 00:10:23.420 } 00:10:23.420 ] 00:10:23.420 }' 00:10:23.420 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.420 02:35:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:23.680 [2024-07-25 02:35:10.565283] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:23.680 "name": "Existed_Raid", 00:10:23.680 "aliases": [ 00:10:23.680 "82d5c37c-4a2e-11ef-9c8e-7947904e2597" 00:10:23.680 ], 00:10:23.680 "product_name": "Raid Volume", 00:10:23.680 "block_size": 512, 00:10:23.680 "num_blocks": 65536, 00:10:23.680 "uuid": "82d5c37c-4a2e-11ef-9c8e-7947904e2597", 00:10:23.680 "assigned_rate_limits": { 00:10:23.680 "rw_ios_per_sec": 0, 00:10:23.680 "rw_mbytes_per_sec": 0, 00:10:23.680 "r_mbytes_per_sec": 0, 00:10:23.680 "w_mbytes_per_sec": 0 00:10:23.680 }, 00:10:23.680 "claimed": false, 00:10:23.680 "zoned": false, 00:10:23.680 "supported_io_types": { 00:10:23.680 "read": true, 00:10:23.680 "write": true, 00:10:23.680 "unmap": false, 00:10:23.680 "flush": false, 00:10:23.680 "reset": true, 00:10:23.680 "nvme_admin": false, 00:10:23.680 "nvme_io": false, 00:10:23.680 "nvme_io_md": false, 00:10:23.680 "write_zeroes": true, 00:10:23.680 "zcopy": false, 00:10:23.680 "get_zone_info": false, 00:10:23.680 "zone_management": false, 00:10:23.680 "zone_append": false, 00:10:23.680 "compare": false, 00:10:23.680 "compare_and_write": false, 00:10:23.680 "abort": 
false, 00:10:23.680 "seek_hole": false, 00:10:23.680 "seek_data": false, 00:10:23.680 "copy": false, 00:10:23.680 "nvme_iov_md": false 00:10:23.680 }, 00:10:23.680 "memory_domains": [ 00:10:23.680 { 00:10:23.680 "dma_device_id": "system", 00:10:23.680 "dma_device_type": 1 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.680 "dma_device_type": 2 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "dma_device_id": "system", 00:10:23.680 "dma_device_type": 1 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.680 "dma_device_type": 2 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "dma_device_id": "system", 00:10:23.680 "dma_device_type": 1 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.680 "dma_device_type": 2 00:10:23.680 } 00:10:23.680 ], 00:10:23.680 "driver_specific": { 00:10:23.680 "raid": { 00:10:23.680 "uuid": "82d5c37c-4a2e-11ef-9c8e-7947904e2597", 00:10:23.680 "strip_size_kb": 0, 00:10:23.680 "state": "online", 00:10:23.680 "raid_level": "raid1", 00:10:23.680 "superblock": false, 00:10:23.680 "num_base_bdevs": 3, 00:10:23.680 "num_base_bdevs_discovered": 3, 00:10:23.680 "num_base_bdevs_operational": 3, 00:10:23.680 "base_bdevs_list": [ 00:10:23.680 { 00:10:23.680 "name": "NewBaseBdev", 00:10:23.680 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:23.680 "is_configured": true, 00:10:23.680 "data_offset": 0, 00:10:23.680 "data_size": 65536 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "name": "BaseBdev2", 00:10:23.680 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:23.680 "is_configured": true, 00:10:23.680 "data_offset": 0, 00:10:23.680 "data_size": 65536 00:10:23.680 }, 00:10:23.680 { 00:10:23.680 "name": "BaseBdev3", 00:10:23.680 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:23.680 "is_configured": true, 00:10:23.680 "data_offset": 0, 00:10:23.680 "data_size": 65536 00:10:23.680 } 00:10:23.680 ] 00:10:23.680 } 00:10:23.680 } 00:10:23.680 }' 00:10:23.680 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:23.940 BaseBdev2 00:10:23.940 BaseBdev3' 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.940 "name": "NewBaseBdev", 00:10:23.940 "aliases": [ 00:10:23.940 "801dc9dd-4a2e-11ef-9c8e-7947904e2597" 00:10:23.940 ], 00:10:23.940 "product_name": "Malloc disk", 00:10:23.940 "block_size": 512, 00:10:23.940 "num_blocks": 65536, 00:10:23.940 "uuid": "801dc9dd-4a2e-11ef-9c8e-7947904e2597", 00:10:23.940 "assigned_rate_limits": { 00:10:23.940 "rw_ios_per_sec": 0, 00:10:23.940 "rw_mbytes_per_sec": 0, 00:10:23.940 "r_mbytes_per_sec": 0, 00:10:23.940 "w_mbytes_per_sec": 0 00:10:23.940 }, 00:10:23.940 "claimed": true, 00:10:23.940 "claim_type": "exclusive_write", 00:10:23.940 "zoned": false, 00:10:23.940 "supported_io_types": { 00:10:23.940 "read": true, 00:10:23.940 "write": 
true, 00:10:23.940 "unmap": true, 00:10:23.940 "flush": true, 00:10:23.940 "reset": true, 00:10:23.940 "nvme_admin": false, 00:10:23.940 "nvme_io": false, 00:10:23.940 "nvme_io_md": false, 00:10:23.940 "write_zeroes": true, 00:10:23.940 "zcopy": true, 00:10:23.940 "get_zone_info": false, 00:10:23.940 "zone_management": false, 00:10:23.940 "zone_append": false, 00:10:23.940 "compare": false, 00:10:23.940 "compare_and_write": false, 00:10:23.940 "abort": true, 00:10:23.940 "seek_hole": false, 00:10:23.940 "seek_data": false, 00:10:23.940 "copy": true, 00:10:23.940 "nvme_iov_md": false 00:10:23.940 }, 00:10:23.940 "memory_domains": [ 00:10:23.940 { 00:10:23.940 "dma_device_id": "system", 00:10:23.940 "dma_device_type": 1 00:10:23.940 }, 00:10:23.940 { 00:10:23.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.940 "dma_device_type": 2 00:10:23.940 } 00:10:23.940 ], 00:10:23.940 "driver_specific": {} 00:10:23.940 }' 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.940 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.200 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.200 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:24.200 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:24.200 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:24.200 02:35:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:24.200 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:24.200 "name": "BaseBdev2", 00:10:24.200 "aliases": [ 00:10:24.200 "7ea049c5-4a2e-11ef-9c8e-7947904e2597" 00:10:24.200 ], 00:10:24.200 "product_name": "Malloc disk", 00:10:24.200 "block_size": 512, 00:10:24.200 "num_blocks": 65536, 00:10:24.200 "uuid": "7ea049c5-4a2e-11ef-9c8e-7947904e2597", 00:10:24.200 "assigned_rate_limits": { 00:10:24.200 "rw_ios_per_sec": 0, 00:10:24.200 "rw_mbytes_per_sec": 0, 00:10:24.200 "r_mbytes_per_sec": 0, 00:10:24.200 "w_mbytes_per_sec": 0 00:10:24.200 }, 00:10:24.200 "claimed": true, 00:10:24.200 "claim_type": "exclusive_write", 00:10:24.200 "zoned": false, 00:10:24.200 "supported_io_types": { 00:10:24.200 "read": true, 00:10:24.200 "write": true, 00:10:24.200 "unmap": true, 00:10:24.200 "flush": true, 00:10:24.200 "reset": true, 00:10:24.200 "nvme_admin": false, 00:10:24.200 "nvme_io": false, 00:10:24.200 "nvme_io_md": false, 
00:10:24.200 "write_zeroes": true, 00:10:24.200 "zcopy": true, 00:10:24.200 "get_zone_info": false, 00:10:24.200 "zone_management": false, 00:10:24.200 "zone_append": false, 00:10:24.201 "compare": false, 00:10:24.201 "compare_and_write": false, 00:10:24.201 "abort": true, 00:10:24.201 "seek_hole": false, 00:10:24.201 "seek_data": false, 00:10:24.201 "copy": true, 00:10:24.201 "nvme_iov_md": false 00:10:24.201 }, 00:10:24.201 "memory_domains": [ 00:10:24.201 { 00:10:24.201 "dma_device_id": "system", 00:10:24.201 "dma_device_type": 1 00:10:24.201 }, 00:10:24.201 { 00:10:24.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.201 "dma_device_type": 2 00:10:24.201 } 00:10:24.201 ], 00:10:24.201 "driver_specific": {} 00:10:24.201 }' 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:24.201 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:24.461 "name": "BaseBdev3", 00:10:24.461 "aliases": [ 00:10:24.461 "7ef3e975-4a2e-11ef-9c8e-7947904e2597" 00:10:24.461 ], 00:10:24.461 "product_name": "Malloc disk", 00:10:24.461 "block_size": 512, 00:10:24.461 "num_blocks": 65536, 00:10:24.461 "uuid": "7ef3e975-4a2e-11ef-9c8e-7947904e2597", 00:10:24.461 "assigned_rate_limits": { 00:10:24.461 "rw_ios_per_sec": 0, 00:10:24.461 "rw_mbytes_per_sec": 0, 00:10:24.461 "r_mbytes_per_sec": 0, 00:10:24.461 "w_mbytes_per_sec": 0 00:10:24.461 }, 00:10:24.461 "claimed": true, 00:10:24.461 "claim_type": "exclusive_write", 00:10:24.461 "zoned": false, 00:10:24.461 "supported_io_types": { 00:10:24.461 "read": true, 00:10:24.461 "write": true, 00:10:24.461 "unmap": true, 00:10:24.461 "flush": true, 00:10:24.461 "reset": true, 00:10:24.461 "nvme_admin": false, 00:10:24.461 "nvme_io": false, 00:10:24.461 "nvme_io_md": false, 00:10:24.461 "write_zeroes": true, 00:10:24.461 "zcopy": true, 00:10:24.461 "get_zone_info": false, 00:10:24.461 "zone_management": false, 00:10:24.461 "zone_append": false, 00:10:24.461 "compare": 
false, 00:10:24.461 "compare_and_write": false, 00:10:24.461 "abort": true, 00:10:24.461 "seek_hole": false, 00:10:24.461 "seek_data": false, 00:10:24.461 "copy": true, 00:10:24.461 "nvme_iov_md": false 00:10:24.461 }, 00:10:24.461 "memory_domains": [ 00:10:24.461 { 00:10:24.461 "dma_device_id": "system", 00:10:24.461 "dma_device_type": 1 00:10:24.461 }, 00:10:24.461 { 00:10:24.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.461 "dma_device_type": 2 00:10:24.461 } 00:10:24.461 ], 00:10:24.461 "driver_specific": {} 00:10:24.461 }' 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:24.461 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:24.721 [2024-07-25 02:35:11.577343] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.721 [2024-07-25 02:35:11.577358] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.721 [2024-07-25 02:35:11.577371] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.721 [2024-07-25 02:35:11.577430] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.721 [2024-07-25 02:35:11.577433] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10b97f434f00 name Existed_Raid, state offline 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 55870 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 55870 ']' 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 55870 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 55870 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:24.721 02:35:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:24.721 killing process with pid 55870 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55870' 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 55870 00:10:24.721 [2024-07-25 02:35:11.606856] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.721 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 55870 00:10:24.721 [2024-07-25 02:35:11.620800] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:24.981 00:10:24.981 real 0m17.619s 00:10:24.981 user 0m31.523s 00:10:24.981 sys 0m3.081s 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.981 ************************************ 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.981 END TEST raid_state_function_test 00:10:24.981 ************************************ 00:10:24.981 02:35:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:24.981 02:35:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:24.981 02:35:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:24.981 02:35:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.981 02:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.981 ************************************ 00:10:24.981 START TEST raid_state_function_test_sb 00:10:24.981 ************************************ 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:24.981 02:35:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:24.981 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56575 00:10:24.982 Process raid pid: 56575 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56575' 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56575 /var/tmp/spdk-raid.sock 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56575 ']' 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.982 02:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.982 [2024-07-25 02:35:11.872619] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:10:24.982 [2024-07-25 02:35:11.872938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:25.551 EAL: TSC is not safe to use in SMP mode 00:10:25.551 EAL: TSC is not invariant 00:10:25.551 [2024-07-25 02:35:12.288024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.551 [2024-07-25 02:35:12.379553] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
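The trace above comes from the raid_state_function_test_sb harness starting a fresh bdev_svc app for the superblock variant of the test; everything that follows is driven over the dedicated RPC socket named in the trace. A minimal sketch of that driving pattern, assembled only from commands that appear verbatim in this log (the repo path, socket path, and bdev names are taken from the trace and will differ in other environments; backgrounding the app and waiting for the socket mirrors what the shared waitforlisten helper does and is sketched here rather than copied):

    # launch the standalone bdev service with a private RPC socket and raid debug logging
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # once the socket is listening, create a raid1 volume with an on-disk superblock (-s)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # and inspect its state the same way verify_raid_bdev_state does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'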
00:10:25.551 [2024-07-25 02:35:12.381193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.551 [2024-07-25 02:35:12.381790] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.551 [2024-07-25 02:35:12.381799] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:26.120 [2024-07-25 02:35:12.932766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.120 [2024-07-25 02:35:12.932799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.120 [2024-07-25 02:35:12.932819] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.120 [2024-07-25 02:35:12.932824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.120 [2024-07-25 02:35:12.932827] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.120 [2024-07-25 02:35:12.932832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.120 02:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.380 02:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:26.380 "name": "Existed_Raid", 00:10:26.380 "uuid": "84d5c403-4a2e-11ef-9c8e-7947904e2597", 00:10:26.380 "strip_size_kb": 0, 00:10:26.380 "state": "configuring", 00:10:26.380 "raid_level": "raid1", 00:10:26.380 "superblock": true, 00:10:26.380 "num_base_bdevs": 3, 00:10:26.380 "num_base_bdevs_discovered": 0, 00:10:26.380 "num_base_bdevs_operational": 
3, 00:10:26.380 "base_bdevs_list": [ 00:10:26.380 { 00:10:26.380 "name": "BaseBdev1", 00:10:26.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.380 "is_configured": false, 00:10:26.380 "data_offset": 0, 00:10:26.380 "data_size": 0 00:10:26.380 }, 00:10:26.380 { 00:10:26.380 "name": "BaseBdev2", 00:10:26.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.380 "is_configured": false, 00:10:26.380 "data_offset": 0, 00:10:26.380 "data_size": 0 00:10:26.380 }, 00:10:26.380 { 00:10:26.380 "name": "BaseBdev3", 00:10:26.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.380 "is_configured": false, 00:10:26.380 "data_offset": 0, 00:10:26.380 "data_size": 0 00:10:26.380 } 00:10:26.380 ] 00:10:26.380 }' 00:10:26.380 02:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:26.380 02:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.638 02:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:26.897 [2024-07-25 02:35:13.564788] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.897 [2024-07-25 02:35:13.564801] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341f9c834500 name Existed_Raid, state configuring 00:10:26.897 02:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:26.897 [2024-07-25 02:35:13.748808] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.897 [2024-07-25 02:35:13.748832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.897 [2024-07-25 02:35:13.748835] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.897 [2024-07-25 02:35:13.748856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.897 [2024-07-25 02:35:13.748859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.897 [2024-07-25 02:35:13.748865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.897 02:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:27.156 [2024-07-25 02:35:13.929693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.156 BaseBdev1 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:27.156 02:35:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.415 [ 00:10:27.415 { 00:10:27.415 "name": "BaseBdev1", 00:10:27.415 "aliases": [ 00:10:27.415 "856dc0e3-4a2e-11ef-9c8e-7947904e2597" 00:10:27.415 ], 00:10:27.415 "product_name": "Malloc disk", 00:10:27.415 "block_size": 512, 00:10:27.415 "num_blocks": 65536, 00:10:27.415 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:27.415 "assigned_rate_limits": { 00:10:27.415 "rw_ios_per_sec": 0, 00:10:27.415 "rw_mbytes_per_sec": 0, 00:10:27.415 "r_mbytes_per_sec": 0, 00:10:27.415 "w_mbytes_per_sec": 0 00:10:27.415 }, 00:10:27.415 "claimed": true, 00:10:27.415 "claim_type": "exclusive_write", 00:10:27.415 "zoned": false, 00:10:27.415 "supported_io_types": { 00:10:27.415 "read": true, 00:10:27.415 "write": true, 00:10:27.415 "unmap": true, 00:10:27.415 "flush": true, 00:10:27.415 "reset": true, 00:10:27.415 "nvme_admin": false, 00:10:27.415 "nvme_io": false, 00:10:27.415 "nvme_io_md": false, 00:10:27.415 "write_zeroes": true, 00:10:27.415 "zcopy": true, 00:10:27.415 "get_zone_info": false, 00:10:27.415 "zone_management": false, 00:10:27.415 "zone_append": false, 00:10:27.415 "compare": false, 00:10:27.415 "compare_and_write": false, 00:10:27.415 "abort": true, 00:10:27.415 "seek_hole": false, 00:10:27.415 "seek_data": false, 00:10:27.415 "copy": true, 00:10:27.415 "nvme_iov_md": false 00:10:27.415 }, 00:10:27.415 "memory_domains": [ 00:10:27.415 { 00:10:27.415 "dma_device_id": "system", 00:10:27.415 "dma_device_type": 1 00:10:27.415 }, 00:10:27.415 { 00:10:27.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.415 "dma_device_type": 2 00:10:27.415 } 00:10:27.415 ], 00:10:27.415 "driver_specific": {} 00:10:27.415 } 00:10:27.415 ] 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.415 02:35:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.683 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.683 "name": "Existed_Raid", 00:10:27.683 "uuid": "855248c1-4a2e-11ef-9c8e-7947904e2597", 00:10:27.683 "strip_size_kb": 0, 00:10:27.683 "state": "configuring", 00:10:27.683 "raid_level": "raid1", 00:10:27.683 "superblock": true, 00:10:27.683 "num_base_bdevs": 3, 00:10:27.683 "num_base_bdevs_discovered": 1, 00:10:27.683 "num_base_bdevs_operational": 3, 00:10:27.683 "base_bdevs_list": [ 00:10:27.683 { 00:10:27.683 "name": "BaseBdev1", 00:10:27.683 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:27.683 "is_configured": true, 00:10:27.683 "data_offset": 2048, 00:10:27.683 "data_size": 63488 00:10:27.683 }, 00:10:27.683 { 00:10:27.683 "name": "BaseBdev2", 00:10:27.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.683 "is_configured": false, 00:10:27.683 "data_offset": 0, 00:10:27.683 "data_size": 0 00:10:27.683 }, 00:10:27.683 { 00:10:27.683 "name": "BaseBdev3", 00:10:27.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.683 "is_configured": false, 00:10:27.683 "data_offset": 0, 00:10:27.683 "data_size": 0 00:10:27.683 } 00:10:27.683 ] 00:10:27.683 }' 00:10:27.683 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.683 02:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.953 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.212 [2024-07-25 02:35:14.900902] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.212 [2024-07-25 02:35:14.900921] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341f9c834500 name Existed_Raid, state configuring 00:10:28.212 02:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:28.212 [2024-07-25 02:35:15.080928] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.212 [2024-07-25 02:35:15.081601] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.212 [2024-07-25 02:35:15.081636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.212 [2024-07-25 02:35:15.081640] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.212 [2024-07-25 02:35:15.081646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.212 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.471 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.471 "name": "Existed_Raid", 00:10:28.471 "uuid": "861d8c9d-4a2e-11ef-9c8e-7947904e2597", 00:10:28.471 "strip_size_kb": 0, 00:10:28.471 "state": "configuring", 00:10:28.471 "raid_level": "raid1", 00:10:28.471 "superblock": true, 00:10:28.471 "num_base_bdevs": 3, 00:10:28.471 "num_base_bdevs_discovered": 1, 00:10:28.471 "num_base_bdevs_operational": 3, 00:10:28.471 "base_bdevs_list": [ 00:10:28.471 { 00:10:28.471 "name": "BaseBdev1", 00:10:28.471 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:28.471 "is_configured": true, 00:10:28.471 "data_offset": 2048, 00:10:28.471 "data_size": 63488 00:10:28.471 }, 00:10:28.471 { 00:10:28.471 "name": "BaseBdev2", 00:10:28.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.471 "is_configured": false, 00:10:28.471 "data_offset": 0, 00:10:28.471 "data_size": 0 00:10:28.471 }, 00:10:28.472 { 00:10:28.472 "name": "BaseBdev3", 00:10:28.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.472 "is_configured": false, 00:10:28.472 "data_offset": 0, 00:10:28.472 "data_size": 0 00:10:28.472 } 00:10:28.472 ] 00:10:28.472 }' 00:10:28.472 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.472 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.731 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.990 [2024-07-25 02:35:15.709080] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.990 BaseBdev2 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:28.990 02:35:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:29.249 02:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.249 [ 00:10:29.249 { 00:10:29.249 "name": "BaseBdev2", 00:10:29.249 "aliases": [ 00:10:29.249 "867d6237-4a2e-11ef-9c8e-7947904e2597" 00:10:29.249 ], 00:10:29.249 "product_name": "Malloc disk", 00:10:29.249 "block_size": 512, 00:10:29.249 "num_blocks": 65536, 00:10:29.249 "uuid": "867d6237-4a2e-11ef-9c8e-7947904e2597", 00:10:29.249 "assigned_rate_limits": { 00:10:29.249 "rw_ios_per_sec": 0, 00:10:29.249 "rw_mbytes_per_sec": 0, 00:10:29.249 "r_mbytes_per_sec": 0, 00:10:29.249 "w_mbytes_per_sec": 0 00:10:29.249 }, 00:10:29.249 "claimed": true, 00:10:29.249 "claim_type": "exclusive_write", 00:10:29.249 "zoned": false, 00:10:29.249 "supported_io_types": { 00:10:29.249 "read": true, 00:10:29.249 "write": true, 00:10:29.249 "unmap": true, 00:10:29.249 "flush": true, 00:10:29.249 "reset": true, 00:10:29.249 "nvme_admin": false, 00:10:29.249 "nvme_io": false, 00:10:29.249 "nvme_io_md": false, 00:10:29.249 "write_zeroes": true, 00:10:29.249 "zcopy": true, 00:10:29.249 "get_zone_info": false, 00:10:29.249 "zone_management": false, 00:10:29.249 "zone_append": false, 00:10:29.249 "compare": false, 00:10:29.249 "compare_and_write": false, 00:10:29.249 "abort": true, 00:10:29.249 "seek_hole": false, 00:10:29.249 "seek_data": false, 00:10:29.249 "copy": true, 00:10:29.249 "nvme_iov_md": false 00:10:29.249 }, 00:10:29.249 "memory_domains": [ 00:10:29.249 { 00:10:29.249 "dma_device_id": "system", 00:10:29.249 "dma_device_type": 1 00:10:29.249 }, 00:10:29.249 { 00:10:29.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.249 "dma_device_type": 2 00:10:29.249 } 00:10:29.249 ], 00:10:29.249 "driver_specific": {} 00:10:29.249 } 00:10:29.249 ] 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.249 02:35:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.249 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.509 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.509 "name": "Existed_Raid", 00:10:29.509 "uuid": "861d8c9d-4a2e-11ef-9c8e-7947904e2597", 00:10:29.509 "strip_size_kb": 0, 00:10:29.509 "state": "configuring", 00:10:29.509 "raid_level": "raid1", 00:10:29.509 "superblock": true, 00:10:29.509 "num_base_bdevs": 3, 00:10:29.509 "num_base_bdevs_discovered": 2, 00:10:29.509 "num_base_bdevs_operational": 3, 00:10:29.509 "base_bdevs_list": [ 00:10:29.509 { 00:10:29.509 "name": "BaseBdev1", 00:10:29.509 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:29.509 "is_configured": true, 00:10:29.509 "data_offset": 2048, 00:10:29.509 "data_size": 63488 00:10:29.509 }, 00:10:29.509 { 00:10:29.509 "name": "BaseBdev2", 00:10:29.509 "uuid": "867d6237-4a2e-11ef-9c8e-7947904e2597", 00:10:29.509 "is_configured": true, 00:10:29.509 "data_offset": 2048, 00:10:29.509 "data_size": 63488 00:10:29.509 }, 00:10:29.509 { 00:10:29.509 "name": "BaseBdev3", 00:10:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.509 "is_configured": false, 00:10:29.509 "data_offset": 0, 00:10:29.509 "data_size": 0 00:10:29.509 } 00:10:29.509 ] 00:10:29.509 }' 00:10:29.509 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.509 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.768 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.028 [2024-07-25 02:35:16.709155] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.028 [2024-07-25 02:35:16.709219] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x341f9c834a00 00:10:30.028 [2024-07-25 02:35:16.709223] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:30.028 [2024-07-25 02:35:16.709241] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x341f9c897e20 00:10:30.028 [2024-07-25 02:35:16.709279] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x341f9c834a00 00:10:30.028 [2024-07-25 02:35:16.709282] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x341f9c834a00 00:10:30.028 [2024-07-25 02:35:16.709296] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.028 BaseBdev3 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:30.028 02:35:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.028 02:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.287 [ 00:10:30.287 { 00:10:30.287 "name": "BaseBdev3", 00:10:30.287 "aliases": [ 00:10:30.287 "8715fc67-4a2e-11ef-9c8e-7947904e2597" 00:10:30.287 ], 00:10:30.287 "product_name": "Malloc disk", 00:10:30.287 "block_size": 512, 00:10:30.287 "num_blocks": 65536, 00:10:30.287 "uuid": "8715fc67-4a2e-11ef-9c8e-7947904e2597", 00:10:30.287 "assigned_rate_limits": { 00:10:30.287 "rw_ios_per_sec": 0, 00:10:30.287 "rw_mbytes_per_sec": 0, 00:10:30.287 "r_mbytes_per_sec": 0, 00:10:30.287 "w_mbytes_per_sec": 0 00:10:30.287 }, 00:10:30.287 "claimed": true, 00:10:30.287 "claim_type": "exclusive_write", 00:10:30.287 "zoned": false, 00:10:30.287 "supported_io_types": { 00:10:30.287 "read": true, 00:10:30.287 "write": true, 00:10:30.287 "unmap": true, 00:10:30.287 "flush": true, 00:10:30.287 "reset": true, 00:10:30.287 "nvme_admin": false, 00:10:30.287 "nvme_io": false, 00:10:30.287 "nvme_io_md": false, 00:10:30.287 "write_zeroes": true, 00:10:30.287 "zcopy": true, 00:10:30.287 "get_zone_info": false, 00:10:30.287 "zone_management": false, 00:10:30.287 "zone_append": false, 00:10:30.287 "compare": false, 00:10:30.287 "compare_and_write": false, 00:10:30.287 "abort": true, 00:10:30.287 "seek_hole": false, 00:10:30.287 "seek_data": false, 00:10:30.287 "copy": true, 00:10:30.287 "nvme_iov_md": false 00:10:30.287 }, 00:10:30.287 "memory_domains": [ 00:10:30.287 { 00:10:30.287 "dma_device_id": "system", 00:10:30.287 "dma_device_type": 1 00:10:30.287 }, 00:10:30.287 { 00:10:30.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.287 "dma_device_type": 2 00:10:30.287 } 00:10:30.287 ], 00:10:30.287 "driver_specific": {} 00:10:30.287 } 00:10:30.287 ] 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.287 02:35:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.287 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.546 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.546 "name": "Existed_Raid", 00:10:30.546 "uuid": "861d8c9d-4a2e-11ef-9c8e-7947904e2597", 00:10:30.546 "strip_size_kb": 0, 00:10:30.546 "state": "online", 00:10:30.546 "raid_level": "raid1", 00:10:30.546 "superblock": true, 00:10:30.546 "num_base_bdevs": 3, 00:10:30.546 "num_base_bdevs_discovered": 3, 00:10:30.546 "num_base_bdevs_operational": 3, 00:10:30.546 "base_bdevs_list": [ 00:10:30.546 { 00:10:30.546 "name": "BaseBdev1", 00:10:30.546 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:30.546 "is_configured": true, 00:10:30.546 "data_offset": 2048, 00:10:30.546 "data_size": 63488 00:10:30.546 }, 00:10:30.546 { 00:10:30.546 "name": "BaseBdev2", 00:10:30.546 "uuid": "867d6237-4a2e-11ef-9c8e-7947904e2597", 00:10:30.546 "is_configured": true, 00:10:30.546 "data_offset": 2048, 00:10:30.546 "data_size": 63488 00:10:30.546 }, 00:10:30.546 { 00:10:30.546 "name": "BaseBdev3", 00:10:30.546 "uuid": "8715fc67-4a2e-11ef-9c8e-7947904e2597", 00:10:30.546 "is_configured": true, 00:10:30.546 "data_offset": 2048, 00:10:30.546 "data_size": 63488 00:10:30.546 } 00:10:30.546 ] 00:10:30.546 }' 00:10:30.546 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.546 02:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:30.806 [2024-07-25 02:35:17.661166] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:30.806 "name": "Existed_Raid", 00:10:30.806 "aliases": [ 00:10:30.806 "861d8c9d-4a2e-11ef-9c8e-7947904e2597" 00:10:30.806 ], 00:10:30.806 "product_name": "Raid Volume", 00:10:30.806 "block_size": 512, 00:10:30.806 "num_blocks": 63488, 00:10:30.806 "uuid": "861d8c9d-4a2e-11ef-9c8e-7947904e2597", 00:10:30.806 "assigned_rate_limits": { 00:10:30.806 "rw_ios_per_sec": 0, 00:10:30.806 "rw_mbytes_per_sec": 0, 00:10:30.806 "r_mbytes_per_sec": 0, 00:10:30.806 "w_mbytes_per_sec": 0 00:10:30.806 }, 00:10:30.806 "claimed": false, 00:10:30.806 "zoned": false, 00:10:30.806 "supported_io_types": { 00:10:30.806 "read": true, 
00:10:30.806 "write": true, 00:10:30.806 "unmap": false, 00:10:30.806 "flush": false, 00:10:30.806 "reset": true, 00:10:30.806 "nvme_admin": false, 00:10:30.806 "nvme_io": false, 00:10:30.806 "nvme_io_md": false, 00:10:30.806 "write_zeroes": true, 00:10:30.806 "zcopy": false, 00:10:30.806 "get_zone_info": false, 00:10:30.806 "zone_management": false, 00:10:30.806 "zone_append": false, 00:10:30.806 "compare": false, 00:10:30.806 "compare_and_write": false, 00:10:30.806 "abort": false, 00:10:30.806 "seek_hole": false, 00:10:30.806 "seek_data": false, 00:10:30.806 "copy": false, 00:10:30.806 "nvme_iov_md": false 00:10:30.806 }, 00:10:30.806 "memory_domains": [ 00:10:30.806 { 00:10:30.806 "dma_device_id": "system", 00:10:30.806 "dma_device_type": 1 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.806 "dma_device_type": 2 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "dma_device_id": "system", 00:10:30.806 "dma_device_type": 1 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.806 "dma_device_type": 2 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "dma_device_id": "system", 00:10:30.806 "dma_device_type": 1 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.806 "dma_device_type": 2 00:10:30.806 } 00:10:30.806 ], 00:10:30.806 "driver_specific": { 00:10:30.806 "raid": { 00:10:30.806 "uuid": "861d8c9d-4a2e-11ef-9c8e-7947904e2597", 00:10:30.806 "strip_size_kb": 0, 00:10:30.806 "state": "online", 00:10:30.806 "raid_level": "raid1", 00:10:30.806 "superblock": true, 00:10:30.806 "num_base_bdevs": 3, 00:10:30.806 "num_base_bdevs_discovered": 3, 00:10:30.806 "num_base_bdevs_operational": 3, 00:10:30.806 "base_bdevs_list": [ 00:10:30.806 { 00:10:30.806 "name": "BaseBdev1", 00:10:30.806 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:30.806 "is_configured": true, 00:10:30.806 "data_offset": 2048, 00:10:30.806 "data_size": 63488 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "name": "BaseBdev2", 00:10:30.806 "uuid": "867d6237-4a2e-11ef-9c8e-7947904e2597", 00:10:30.806 "is_configured": true, 00:10:30.806 "data_offset": 2048, 00:10:30.806 "data_size": 63488 00:10:30.806 }, 00:10:30.806 { 00:10:30.806 "name": "BaseBdev3", 00:10:30.806 "uuid": "8715fc67-4a2e-11ef-9c8e-7947904e2597", 00:10:30.806 "is_configured": true, 00:10:30.806 "data_offset": 2048, 00:10:30.806 "data_size": 63488 00:10:30.806 } 00:10:30.806 ] 00:10:30.806 } 00:10:30.806 } 00:10:30.806 }' 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:30.806 BaseBdev2 00:10:30.806 BaseBdev3' 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:30.806 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.066 "name": "BaseBdev1", 00:10:31.066 "aliases": [ 00:10:31.066 "856dc0e3-4a2e-11ef-9c8e-7947904e2597" 00:10:31.066 ], 00:10:31.066 "product_name": "Malloc disk", 00:10:31.066 
"block_size": 512, 00:10:31.066 "num_blocks": 65536, 00:10:31.066 "uuid": "856dc0e3-4a2e-11ef-9c8e-7947904e2597", 00:10:31.066 "assigned_rate_limits": { 00:10:31.066 "rw_ios_per_sec": 0, 00:10:31.066 "rw_mbytes_per_sec": 0, 00:10:31.066 "r_mbytes_per_sec": 0, 00:10:31.066 "w_mbytes_per_sec": 0 00:10:31.066 }, 00:10:31.066 "claimed": true, 00:10:31.066 "claim_type": "exclusive_write", 00:10:31.066 "zoned": false, 00:10:31.066 "supported_io_types": { 00:10:31.066 "read": true, 00:10:31.066 "write": true, 00:10:31.066 "unmap": true, 00:10:31.066 "flush": true, 00:10:31.066 "reset": true, 00:10:31.066 "nvme_admin": false, 00:10:31.066 "nvme_io": false, 00:10:31.066 "nvme_io_md": false, 00:10:31.066 "write_zeroes": true, 00:10:31.066 "zcopy": true, 00:10:31.066 "get_zone_info": false, 00:10:31.066 "zone_management": false, 00:10:31.066 "zone_append": false, 00:10:31.066 "compare": false, 00:10:31.066 "compare_and_write": false, 00:10:31.066 "abort": true, 00:10:31.066 "seek_hole": false, 00:10:31.066 "seek_data": false, 00:10:31.066 "copy": true, 00:10:31.066 "nvme_iov_md": false 00:10:31.066 }, 00:10:31.066 "memory_domains": [ 00:10:31.066 { 00:10:31.066 "dma_device_id": "system", 00:10:31.066 "dma_device_type": 1 00:10:31.066 }, 00:10:31.066 { 00:10:31.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.066 "dma_device_type": 2 00:10:31.066 } 00:10:31.066 ], 00:10:31.066 "driver_specific": {} 00:10:31.066 }' 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:31.066 02:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.325 "name": "BaseBdev2", 00:10:31.325 "aliases": [ 00:10:31.325 "867d6237-4a2e-11ef-9c8e-7947904e2597" 00:10:31.325 ], 00:10:31.325 "product_name": "Malloc disk", 00:10:31.325 "block_size": 512, 00:10:31.325 "num_blocks": 65536, 00:10:31.325 "uuid": "867d6237-4a2e-11ef-9c8e-7947904e2597", 00:10:31.325 "assigned_rate_limits": { 
00:10:31.325 "rw_ios_per_sec": 0, 00:10:31.325 "rw_mbytes_per_sec": 0, 00:10:31.325 "r_mbytes_per_sec": 0, 00:10:31.325 "w_mbytes_per_sec": 0 00:10:31.325 }, 00:10:31.325 "claimed": true, 00:10:31.325 "claim_type": "exclusive_write", 00:10:31.325 "zoned": false, 00:10:31.325 "supported_io_types": { 00:10:31.325 "read": true, 00:10:31.325 "write": true, 00:10:31.325 "unmap": true, 00:10:31.325 "flush": true, 00:10:31.325 "reset": true, 00:10:31.325 "nvme_admin": false, 00:10:31.325 "nvme_io": false, 00:10:31.325 "nvme_io_md": false, 00:10:31.325 "write_zeroes": true, 00:10:31.325 "zcopy": true, 00:10:31.325 "get_zone_info": false, 00:10:31.325 "zone_management": false, 00:10:31.325 "zone_append": false, 00:10:31.325 "compare": false, 00:10:31.325 "compare_and_write": false, 00:10:31.325 "abort": true, 00:10:31.325 "seek_hole": false, 00:10:31.325 "seek_data": false, 00:10:31.325 "copy": true, 00:10:31.325 "nvme_iov_md": false 00:10:31.325 }, 00:10:31.325 "memory_domains": [ 00:10:31.325 { 00:10:31.325 "dma_device_id": "system", 00:10:31.325 "dma_device_type": 1 00:10:31.325 }, 00:10:31.325 { 00:10:31.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.325 "dma_device_type": 2 00:10:31.325 } 00:10:31.325 ], 00:10:31.325 "driver_specific": {} 00:10:31.325 }' 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.325 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.326 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.326 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:31.326 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.585 "name": "BaseBdev3", 00:10:31.585 "aliases": [ 00:10:31.585 "8715fc67-4a2e-11ef-9c8e-7947904e2597" 00:10:31.585 ], 00:10:31.585 "product_name": "Malloc disk", 00:10:31.585 "block_size": 512, 00:10:31.585 "num_blocks": 65536, 00:10:31.585 "uuid": "8715fc67-4a2e-11ef-9c8e-7947904e2597", 00:10:31.585 "assigned_rate_limits": { 00:10:31.585 "rw_ios_per_sec": 0, 00:10:31.585 "rw_mbytes_per_sec": 0, 00:10:31.585 "r_mbytes_per_sec": 0, 00:10:31.585 "w_mbytes_per_sec": 0 
00:10:31.585 }, 00:10:31.585 "claimed": true, 00:10:31.585 "claim_type": "exclusive_write", 00:10:31.585 "zoned": false, 00:10:31.585 "supported_io_types": { 00:10:31.585 "read": true, 00:10:31.585 "write": true, 00:10:31.585 "unmap": true, 00:10:31.585 "flush": true, 00:10:31.585 "reset": true, 00:10:31.585 "nvme_admin": false, 00:10:31.585 "nvme_io": false, 00:10:31.585 "nvme_io_md": false, 00:10:31.585 "write_zeroes": true, 00:10:31.585 "zcopy": true, 00:10:31.585 "get_zone_info": false, 00:10:31.585 "zone_management": false, 00:10:31.585 "zone_append": false, 00:10:31.585 "compare": false, 00:10:31.585 "compare_and_write": false, 00:10:31.585 "abort": true, 00:10:31.585 "seek_hole": false, 00:10:31.585 "seek_data": false, 00:10:31.585 "copy": true, 00:10:31.585 "nvme_iov_md": false 00:10:31.585 }, 00:10:31.585 "memory_domains": [ 00:10:31.585 { 00:10:31.585 "dma_device_id": "system", 00:10:31.585 "dma_device_type": 1 00:10:31.585 }, 00:10:31.585 { 00:10:31.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.585 "dma_device_type": 2 00:10:31.585 } 00:10:31.585 ], 00:10:31.585 "driver_specific": {} 00:10:31.585 }' 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.585 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:31.844 [2024-07-25 02:35:18.673246] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:31.844 02:35:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.844 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.104 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.104 "name": "Existed_Raid", 00:10:32.104 "uuid": "861d8c9d-4a2e-11ef-9c8e-7947904e2597", 00:10:32.104 "strip_size_kb": 0, 00:10:32.104 "state": "online", 00:10:32.104 "raid_level": "raid1", 00:10:32.104 "superblock": true, 00:10:32.104 "num_base_bdevs": 3, 00:10:32.104 "num_base_bdevs_discovered": 2, 00:10:32.104 "num_base_bdevs_operational": 2, 00:10:32.104 "base_bdevs_list": [ 00:10:32.104 { 00:10:32.104 "name": null, 00:10:32.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.104 "is_configured": false, 00:10:32.104 "data_offset": 2048, 00:10:32.104 "data_size": 63488 00:10:32.104 }, 00:10:32.104 { 00:10:32.104 "name": "BaseBdev2", 00:10:32.104 "uuid": "867d6237-4a2e-11ef-9c8e-7947904e2597", 00:10:32.104 "is_configured": true, 00:10:32.104 "data_offset": 2048, 00:10:32.104 "data_size": 63488 00:10:32.104 }, 00:10:32.104 { 00:10:32.104 "name": "BaseBdev3", 00:10:32.104 "uuid": "8715fc67-4a2e-11ef-9c8e-7947904e2597", 00:10:32.104 "is_configured": true, 00:10:32.104 "data_offset": 2048, 00:10:32.104 "data_size": 63488 00:10:32.104 } 00:10:32.104 ] 00:10:32.104 }' 00:10:32.104 02:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.104 02:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.364 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:32.364 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:32.364 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.364 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:10:32.624 [2024-07-25 02:35:19.498004] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.624 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:32.883 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:32.883 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:32.883 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:33.143 [2024-07-25 02:35:19.866702] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.143 [2024-07-25 02:35:19.866722] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.143 [2024-07-25 02:35:19.871459] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.143 [2024-07-25 02:35:19.871471] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.143 [2024-07-25 02:35:19.871474] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341f9c834a00 name Existed_Raid, state offline 00:10:33.143 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:33.143 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:33.143 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.143 02:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.403 BaseBdev2 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:33.403 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:33.663 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.930 [ 00:10:33.930 { 00:10:33.930 "name": "BaseBdev2", 00:10:33.930 "aliases": [ 00:10:33.930 "893011cb-4a2e-11ef-9c8e-7947904e2597" 00:10:33.930 ], 00:10:33.930 "product_name": "Malloc disk", 00:10:33.930 "block_size": 512, 00:10:33.930 "num_blocks": 65536, 00:10:33.930 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:33.930 "assigned_rate_limits": { 00:10:33.930 "rw_ios_per_sec": 0, 00:10:33.930 "rw_mbytes_per_sec": 0, 00:10:33.930 "r_mbytes_per_sec": 0, 00:10:33.930 "w_mbytes_per_sec": 0 00:10:33.930 }, 00:10:33.930 "claimed": false, 00:10:33.930 "zoned": false, 00:10:33.930 "supported_io_types": { 00:10:33.930 "read": true, 00:10:33.930 "write": true, 00:10:33.930 "unmap": true, 00:10:33.930 "flush": true, 00:10:33.930 "reset": true, 00:10:33.930 "nvme_admin": false, 00:10:33.930 "nvme_io": false, 00:10:33.930 "nvme_io_md": false, 00:10:33.930 "write_zeroes": true, 00:10:33.930 "zcopy": true, 00:10:33.930 "get_zone_info": false, 00:10:33.930 "zone_management": false, 00:10:33.930 "zone_append": false, 00:10:33.930 "compare": false, 00:10:33.930 "compare_and_write": false, 00:10:33.930 "abort": true, 00:10:33.930 "seek_hole": false, 00:10:33.930 "seek_data": false, 00:10:33.930 "copy": true, 00:10:33.930 "nvme_iov_md": false 00:10:33.930 }, 00:10:33.930 "memory_domains": [ 00:10:33.930 { 00:10:33.930 "dma_device_id": "system", 00:10:33.930 "dma_device_type": 1 00:10:33.930 }, 00:10:33.930 { 00:10:33.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.930 "dma_device_type": 2 00:10:33.930 } 00:10:33.930 ], 00:10:33.930 "driver_specific": {} 00:10:33.930 } 00:10:33.930 ] 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.930 BaseBdev3 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:33.930 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:34.191 02:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.449 [ 00:10:34.449 { 00:10:34.449 "name": "BaseBdev3", 00:10:34.449 "aliases": [ 00:10:34.449 "89827943-4a2e-11ef-9c8e-7947904e2597" 00:10:34.449 ], 00:10:34.449 "product_name": "Malloc disk", 00:10:34.449 "block_size": 512, 00:10:34.449 "num_blocks": 65536, 00:10:34.449 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:34.449 "assigned_rate_limits": { 00:10:34.449 "rw_ios_per_sec": 0, 00:10:34.449 "rw_mbytes_per_sec": 0, 00:10:34.449 "r_mbytes_per_sec": 0, 00:10:34.449 "w_mbytes_per_sec": 0 00:10:34.449 }, 00:10:34.449 "claimed": false, 00:10:34.449 "zoned": false, 00:10:34.449 "supported_io_types": { 00:10:34.449 "read": true, 00:10:34.449 "write": true, 00:10:34.449 "unmap": true, 00:10:34.449 "flush": true, 00:10:34.449 "reset": true, 00:10:34.449 "nvme_admin": false, 00:10:34.449 "nvme_io": false, 00:10:34.449 "nvme_io_md": false, 00:10:34.449 "write_zeroes": true, 00:10:34.450 "zcopy": true, 00:10:34.450 "get_zone_info": false, 00:10:34.450 "zone_management": false, 00:10:34.450 "zone_append": false, 00:10:34.450 "compare": false, 00:10:34.450 "compare_and_write": false, 00:10:34.450 "abort": true, 00:10:34.450 "seek_hole": false, 00:10:34.450 "seek_data": false, 00:10:34.450 "copy": true, 00:10:34.450 "nvme_iov_md": false 00:10:34.450 }, 00:10:34.450 "memory_domains": [ 00:10:34.450 { 00:10:34.450 "dma_device_id": "system", 00:10:34.450 "dma_device_type": 1 00:10:34.450 }, 00:10:34.450 { 00:10:34.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.450 "dma_device_type": 2 00:10:34.450 } 00:10:34.450 ], 00:10:34.450 "driver_specific": {} 00:10:34.450 } 00:10:34.450 ] 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:34.450 [2024-07-25 02:35:21.311549] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.450 [2024-07-25 02:35:21.311586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.450 [2024-07-25 02:35:21.311592] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.450 [2024-07-25 02:35:21.312073] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.450 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.708 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.708 "name": "Existed_Raid", 00:10:34.708 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:34.708 "strip_size_kb": 0, 00:10:34.708 "state": "configuring", 00:10:34.708 "raid_level": "raid1", 00:10:34.708 "superblock": true, 00:10:34.708 "num_base_bdevs": 3, 00:10:34.708 "num_base_bdevs_discovered": 2, 00:10:34.708 "num_base_bdevs_operational": 3, 00:10:34.708 "base_bdevs_list": [ 00:10:34.708 { 00:10:34.708 "name": "BaseBdev1", 00:10:34.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.708 "is_configured": false, 00:10:34.708 "data_offset": 0, 00:10:34.708 "data_size": 0 00:10:34.708 }, 00:10:34.708 { 00:10:34.708 "name": "BaseBdev2", 00:10:34.708 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:34.708 "is_configured": true, 00:10:34.708 "data_offset": 2048, 00:10:34.708 "data_size": 63488 00:10:34.708 }, 00:10:34.708 { 00:10:34.708 "name": "BaseBdev3", 00:10:34.708 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:34.708 "is_configured": true, 00:10:34.708 "data_offset": 2048, 00:10:34.708 "data_size": 63488 00:10:34.708 } 00:10:34.708 ] 00:10:34.708 }' 00:10:34.708 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.708 02:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.968 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:35.227 [2024-07-25 02:35:21.939593] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.227 02:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.486 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.486 "name": "Existed_Raid", 00:10:35.486 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:35.486 "strip_size_kb": 0, 00:10:35.486 "state": "configuring", 00:10:35.486 "raid_level": "raid1", 00:10:35.486 "superblock": true, 00:10:35.486 "num_base_bdevs": 3, 00:10:35.486 "num_base_bdevs_discovered": 1, 00:10:35.486 "num_base_bdevs_operational": 3, 00:10:35.486 "base_bdevs_list": [ 00:10:35.486 { 00:10:35.486 "name": "BaseBdev1", 00:10:35.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.486 "is_configured": false, 00:10:35.486 "data_offset": 0, 00:10:35.486 "data_size": 0 00:10:35.486 }, 00:10:35.486 { 00:10:35.486 "name": null, 00:10:35.486 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:35.486 "is_configured": false, 00:10:35.486 "data_offset": 2048, 00:10:35.486 "data_size": 63488 00:10:35.486 }, 00:10:35.486 { 00:10:35.486 "name": "BaseBdev3", 00:10:35.486 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:35.487 "is_configured": true, 00:10:35.487 "data_offset": 2048, 00:10:35.487 "data_size": 63488 00:10:35.487 } 00:10:35.487 ] 00:10:35.487 }' 00:10:35.487 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.487 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.747 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.747 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.747 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:35.747 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.035 [2024-07-25 02:35:22.739758] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.035 BaseBdev1 00:10:36.035 02:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:36.035 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:36.036 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:36.036 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:36.036 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:36.036 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:36.036 02:35:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:36.036 02:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.295 [ 00:10:36.295 { 00:10:36.295 "name": "BaseBdev1", 00:10:36.295 "aliases": [ 00:10:36.295 "8aae2e87-4a2e-11ef-9c8e-7947904e2597" 00:10:36.295 ], 00:10:36.295 "product_name": "Malloc disk", 00:10:36.295 "block_size": 512, 00:10:36.295 "num_blocks": 65536, 00:10:36.295 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:36.295 "assigned_rate_limits": { 00:10:36.295 "rw_ios_per_sec": 0, 00:10:36.295 "rw_mbytes_per_sec": 0, 00:10:36.295 "r_mbytes_per_sec": 0, 00:10:36.295 "w_mbytes_per_sec": 0 00:10:36.295 }, 00:10:36.295 "claimed": true, 00:10:36.295 "claim_type": "exclusive_write", 00:10:36.295 "zoned": false, 00:10:36.295 "supported_io_types": { 00:10:36.295 "read": true, 00:10:36.295 "write": true, 00:10:36.295 "unmap": true, 00:10:36.295 "flush": true, 00:10:36.295 "reset": true, 00:10:36.295 "nvme_admin": false, 00:10:36.295 "nvme_io": false, 00:10:36.295 "nvme_io_md": false, 00:10:36.295 "write_zeroes": true, 00:10:36.295 "zcopy": true, 00:10:36.295 "get_zone_info": false, 00:10:36.295 "zone_management": false, 00:10:36.295 "zone_append": false, 00:10:36.295 "compare": false, 00:10:36.295 "compare_and_write": false, 00:10:36.295 "abort": true, 00:10:36.295 "seek_hole": false, 00:10:36.295 "seek_data": false, 00:10:36.295 "copy": true, 00:10:36.295 "nvme_iov_md": false 00:10:36.295 }, 00:10:36.295 "memory_domains": [ 00:10:36.295 { 00:10:36.295 "dma_device_id": "system", 00:10:36.295 "dma_device_type": 1 00:10:36.295 }, 00:10:36.295 { 00:10:36.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.295 "dma_device_type": 2 00:10:36.295 } 00:10:36.295 ], 00:10:36.295 "driver_specific": {} 00:10:36.295 } 00:10:36.295 ] 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.295 02:35:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.555 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.555 "name": "Existed_Raid", 00:10:36.555 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:36.555 "strip_size_kb": 0, 00:10:36.555 "state": "configuring", 00:10:36.555 "raid_level": "raid1", 00:10:36.555 "superblock": true, 00:10:36.555 "num_base_bdevs": 3, 00:10:36.555 "num_base_bdevs_discovered": 2, 00:10:36.555 "num_base_bdevs_operational": 3, 00:10:36.555 "base_bdevs_list": [ 00:10:36.555 { 00:10:36.555 "name": "BaseBdev1", 00:10:36.555 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:36.555 "is_configured": true, 00:10:36.555 "data_offset": 2048, 00:10:36.555 "data_size": 63488 00:10:36.555 }, 00:10:36.555 { 00:10:36.555 "name": null, 00:10:36.555 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:36.555 "is_configured": false, 00:10:36.555 "data_offset": 2048, 00:10:36.555 "data_size": 63488 00:10:36.555 }, 00:10:36.555 { 00:10:36.555 "name": "BaseBdev3", 00:10:36.555 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:36.555 "is_configured": true, 00:10:36.555 "data_offset": 2048, 00:10:36.555 "data_size": 63488 00:10:36.555 } 00:10:36.555 ] 00:10:36.555 }' 00:10:36.555 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.555 02:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.814 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.814 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.814 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:36.814 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:37.164 [2024-07-25 02:35:23.887758] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.164 02:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.164 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.164 "name": "Existed_Raid", 00:10:37.164 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:37.164 "strip_size_kb": 0, 00:10:37.164 "state": "configuring", 00:10:37.164 "raid_level": "raid1", 00:10:37.164 "superblock": true, 00:10:37.164 "num_base_bdevs": 3, 00:10:37.164 "num_base_bdevs_discovered": 1, 00:10:37.164 "num_base_bdevs_operational": 3, 00:10:37.164 "base_bdevs_list": [ 00:10:37.164 { 00:10:37.164 "name": "BaseBdev1", 00:10:37.164 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:37.164 "is_configured": true, 00:10:37.164 "data_offset": 2048, 00:10:37.165 "data_size": 63488 00:10:37.165 }, 00:10:37.165 { 00:10:37.165 "name": null, 00:10:37.165 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:37.165 "is_configured": false, 00:10:37.165 "data_offset": 2048, 00:10:37.165 "data_size": 63488 00:10:37.165 }, 00:10:37.165 { 00:10:37.165 "name": null, 00:10:37.165 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:37.165 "is_configured": false, 00:10:37.165 "data_offset": 2048, 00:10:37.165 "data_size": 63488 00:10:37.165 } 00:10:37.165 ] 00:10:37.165 }' 00:10:37.165 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.165 02:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.738 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.739 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.739 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:37.739 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.997 [2024-07-25 02:35:24.695832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.997 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.997 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
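[Editorial note, not part of the captured log] The repeated checks in this transcript are driven by verify_raid_bdev_state() and verify_raid_bdev_properties() in bdev/bdev_raid.sh: each one queries the RPC socket and compares fields of the "Existed_Raid" entry with jq. The sketch below is an illustrative, hedged recap of that query using only the commands already visible in the log (rpc.py path, socket, subcommand, and the select filter); the second jq expression is an assumption of mine, shown only to make the compared fields explicit, and it presumes an SPDK target is still running on the same socket.

```sh
# Illustrative only -- not part of the captured log.
# Assumes the SPDK target used by the test is still serving RPCs on this socket.
SOCK=/var/tmp/spdk-raid.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Dump the raid bdev entry the test inspects (same RPC + jq filter as in the log).
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'

# Summarize the fields verify_raid_bdev_state() compares: state, raid_level,
# and the discovered/operational base bdev counts (hypothetical filter).
"$RPC" -s "$SOCK" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
```

In the surrounding output the test runs exactly this kind of query after each bdev_malloc_delete, bdev_raid_remove_base_bdev, and bdev_raid_add_base_bdev step, expecting the state to move between "online" and "configuring" and the discovered/operational counts to track the remaining base bdevs.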
00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.998 "name": "Existed_Raid", 00:10:37.998 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:37.998 "strip_size_kb": 0, 00:10:37.998 "state": "configuring", 00:10:37.998 "raid_level": "raid1", 00:10:37.998 "superblock": true, 00:10:37.998 "num_base_bdevs": 3, 00:10:37.998 "num_base_bdevs_discovered": 2, 00:10:37.998 "num_base_bdevs_operational": 3, 00:10:37.998 "base_bdevs_list": [ 00:10:37.998 { 00:10:37.998 "name": "BaseBdev1", 00:10:37.998 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:37.998 "is_configured": true, 00:10:37.998 "data_offset": 2048, 00:10:37.998 "data_size": 63488 00:10:37.998 }, 00:10:37.998 { 00:10:37.998 "name": null, 00:10:37.998 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:37.998 "is_configured": false, 00:10:37.998 "data_offset": 2048, 00:10:37.998 "data_size": 63488 00:10:37.998 }, 00:10:37.998 { 00:10:37.998 "name": "BaseBdev3", 00:10:37.998 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:37.998 "is_configured": true, 00:10:37.998 "data_offset": 2048, 00:10:37.998 "data_size": 63488 00:10:37.998 } 00:10:37.998 ] 00:10:37.998 }' 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.998 02:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.257 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.257 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.516 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:38.516 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:38.775 [2024-07-25 02:35:25.491923] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.775 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.034 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.034 "name": "Existed_Raid", 00:10:39.034 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:39.034 "strip_size_kb": 0, 00:10:39.034 "state": "configuring", 00:10:39.034 "raid_level": "raid1", 00:10:39.034 "superblock": true, 00:10:39.034 "num_base_bdevs": 3, 00:10:39.034 "num_base_bdevs_discovered": 1, 00:10:39.034 "num_base_bdevs_operational": 3, 00:10:39.034 "base_bdevs_list": [ 00:10:39.034 { 00:10:39.034 "name": null, 00:10:39.034 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:39.034 "is_configured": false, 00:10:39.034 "data_offset": 2048, 00:10:39.034 "data_size": 63488 00:10:39.034 }, 00:10:39.034 { 00:10:39.034 "name": null, 00:10:39.034 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:39.034 "is_configured": false, 00:10:39.034 "data_offset": 2048, 00:10:39.034 "data_size": 63488 00:10:39.034 }, 00:10:39.034 { 00:10:39.034 "name": "BaseBdev3", 00:10:39.034 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:39.035 "is_configured": true, 00:10:39.035 "data_offset": 2048, 00:10:39.035 "data_size": 63488 00:10:39.035 } 00:10:39.035 ] 00:10:39.035 }' 00:10:39.035 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.035 02:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.294 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.294 02:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.294 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:39.294 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.554 [2024-07-25 02:35:26.312635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.554 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.813 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.813 "name": "Existed_Raid", 00:10:39.813 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:39.813 "strip_size_kb": 0, 00:10:39.813 "state": "configuring", 00:10:39.813 "raid_level": "raid1", 00:10:39.813 "superblock": true, 00:10:39.813 "num_base_bdevs": 3, 00:10:39.813 "num_base_bdevs_discovered": 2, 00:10:39.813 "num_base_bdevs_operational": 3, 00:10:39.813 "base_bdevs_list": [ 00:10:39.813 { 00:10:39.813 "name": null, 00:10:39.813 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:39.813 "is_configured": false, 00:10:39.813 "data_offset": 2048, 00:10:39.813 "data_size": 63488 00:10:39.813 }, 00:10:39.813 { 00:10:39.813 "name": "BaseBdev2", 00:10:39.813 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:39.813 "is_configured": true, 00:10:39.813 "data_offset": 2048, 00:10:39.813 "data_size": 63488 00:10:39.813 }, 00:10:39.813 { 00:10:39.813 "name": "BaseBdev3", 00:10:39.813 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:39.813 "is_configured": true, 00:10:39.813 "data_offset": 2048, 00:10:39.813 "data_size": 63488 00:10:39.813 } 00:10:39.813 ] 00:10:39.813 }' 00:10:39.813 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.813 02:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.073 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.073 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.073 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:40.073 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.073 02:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:40.332 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8aae2e87-4a2e-11ef-9c8e-7947904e2597 00:10:40.591 [2024-07-25 02:35:27.312817] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:40.591 [2024-07-25 02:35:27.312852] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x341f9c834f00 00:10:40.591 [2024-07-25 02:35:27.312872] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.591 [2024-07-25 02:35:27.312887] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x341f9c897e20 00:10:40.591 [2024-07-25 02:35:27.312918] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x341f9c834f00 00:10:40.591 [2024-07-25 02:35:27.312921] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x341f9c834f00 00:10:40.591 [2024-07-25 02:35:27.312935] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.591 NewBaseBdev 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:40.591 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:40.851 [ 00:10:40.851 { 00:10:40.851 "name": "NewBaseBdev", 00:10:40.851 "aliases": [ 00:10:40.851 "8aae2e87-4a2e-11ef-9c8e-7947904e2597" 00:10:40.851 ], 00:10:40.851 "product_name": "Malloc disk", 00:10:40.851 "block_size": 512, 00:10:40.851 "num_blocks": 65536, 00:10:40.851 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:40.851 "assigned_rate_limits": { 00:10:40.851 "rw_ios_per_sec": 0, 00:10:40.851 "rw_mbytes_per_sec": 0, 00:10:40.851 "r_mbytes_per_sec": 0, 00:10:40.851 "w_mbytes_per_sec": 0 00:10:40.851 }, 00:10:40.851 "claimed": true, 00:10:40.851 "claim_type": "exclusive_write", 00:10:40.851 "zoned": false, 00:10:40.851 "supported_io_types": { 00:10:40.851 "read": true, 00:10:40.851 "write": true, 00:10:40.851 "unmap": true, 00:10:40.851 "flush": true, 00:10:40.851 "reset": true, 00:10:40.851 "nvme_admin": false, 00:10:40.851 "nvme_io": false, 00:10:40.851 "nvme_io_md": false, 00:10:40.851 "write_zeroes": true, 00:10:40.851 "zcopy": true, 00:10:40.851 "get_zone_info": false, 00:10:40.851 "zone_management": false, 00:10:40.851 "zone_append": false, 00:10:40.851 "compare": false, 00:10:40.851 "compare_and_write": false, 00:10:40.851 "abort": true, 00:10:40.851 "seek_hole": false, 00:10:40.851 "seek_data": false, 00:10:40.851 "copy": true, 00:10:40.851 "nvme_iov_md": false 00:10:40.851 }, 00:10:40.851 "memory_domains": [ 00:10:40.851 { 00:10:40.851 "dma_device_id": "system", 00:10:40.851 "dma_device_type": 1 00:10:40.851 }, 00:10:40.851 { 00:10:40.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.851 "dma_device_type": 2 00:10:40.851 } 00:10:40.851 ], 00:10:40.851 "driver_specific": {} 00:10:40.851 } 00:10:40.851 ] 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:40.851 02:35:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.851 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.111 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.111 "name": "Existed_Raid", 00:10:41.111 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:41.111 "strip_size_kb": 0, 00:10:41.111 "state": "online", 00:10:41.111 "raid_level": "raid1", 00:10:41.111 "superblock": true, 00:10:41.111 "num_base_bdevs": 3, 00:10:41.111 "num_base_bdevs_discovered": 3, 00:10:41.111 "num_base_bdevs_operational": 3, 00:10:41.111 "base_bdevs_list": [ 00:10:41.111 { 00:10:41.111 "name": "NewBaseBdev", 00:10:41.111 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:41.111 "is_configured": true, 00:10:41.111 "data_offset": 2048, 00:10:41.111 "data_size": 63488 00:10:41.111 }, 00:10:41.111 { 00:10:41.111 "name": "BaseBdev2", 00:10:41.111 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:41.111 "is_configured": true, 00:10:41.111 "data_offset": 2048, 00:10:41.111 "data_size": 63488 00:10:41.111 }, 00:10:41.111 { 00:10:41.111 "name": "BaseBdev3", 00:10:41.111 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:41.111 "is_configured": true, 00:10:41.111 "data_offset": 2048, 00:10:41.111 "data_size": 63488 00:10:41.111 } 00:10:41.111 ] 00:10:41.111 }' 00:10:41.111 02:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.111 02:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:41.370 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:41.370 02:35:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:41.630 [2024-07-25 02:35:28.296830] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:41.630 "name": "Existed_Raid", 00:10:41.630 "aliases": [ 00:10:41.630 "89d4444b-4a2e-11ef-9c8e-7947904e2597" 00:10:41.630 ], 00:10:41.630 "product_name": "Raid Volume", 00:10:41.630 "block_size": 512, 00:10:41.630 "num_blocks": 63488, 00:10:41.630 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:41.630 "assigned_rate_limits": { 00:10:41.630 "rw_ios_per_sec": 0, 00:10:41.630 "rw_mbytes_per_sec": 0, 00:10:41.630 "r_mbytes_per_sec": 0, 00:10:41.630 "w_mbytes_per_sec": 0 00:10:41.630 }, 00:10:41.630 "claimed": false, 00:10:41.630 "zoned": false, 00:10:41.630 "supported_io_types": { 00:10:41.630 "read": true, 00:10:41.630 "write": true, 00:10:41.630 "unmap": false, 00:10:41.630 "flush": false, 00:10:41.630 "reset": true, 00:10:41.630 "nvme_admin": false, 00:10:41.630 "nvme_io": false, 00:10:41.630 "nvme_io_md": false, 00:10:41.630 "write_zeroes": true, 00:10:41.630 "zcopy": false, 00:10:41.630 "get_zone_info": false, 00:10:41.630 "zone_management": false, 00:10:41.630 "zone_append": false, 00:10:41.630 "compare": false, 00:10:41.630 "compare_and_write": false, 00:10:41.630 "abort": false, 00:10:41.630 "seek_hole": false, 00:10:41.630 "seek_data": false, 00:10:41.630 "copy": false, 00:10:41.630 "nvme_iov_md": false 00:10:41.630 }, 00:10:41.630 "memory_domains": [ 00:10:41.630 { 00:10:41.630 "dma_device_id": "system", 00:10:41.630 "dma_device_type": 1 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.630 "dma_device_type": 2 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "dma_device_id": "system", 00:10:41.630 "dma_device_type": 1 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.630 "dma_device_type": 2 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "dma_device_id": "system", 00:10:41.630 "dma_device_type": 1 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.630 "dma_device_type": 2 00:10:41.630 } 00:10:41.630 ], 00:10:41.630 "driver_specific": { 00:10:41.630 "raid": { 00:10:41.630 "uuid": "89d4444b-4a2e-11ef-9c8e-7947904e2597", 00:10:41.630 "strip_size_kb": 0, 00:10:41.630 "state": "online", 00:10:41.630 "raid_level": "raid1", 00:10:41.630 "superblock": true, 00:10:41.630 "num_base_bdevs": 3, 00:10:41.630 "num_base_bdevs_discovered": 3, 00:10:41.630 "num_base_bdevs_operational": 3, 00:10:41.630 "base_bdevs_list": [ 00:10:41.630 { 00:10:41.630 "name": "NewBaseBdev", 00:10:41.630 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:41.630 "is_configured": true, 00:10:41.630 "data_offset": 2048, 00:10:41.630 "data_size": 63488 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "name": "BaseBdev2", 00:10:41.630 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:41.630 "is_configured": true, 00:10:41.630 "data_offset": 2048, 00:10:41.630 "data_size": 63488 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "name": "BaseBdev3", 00:10:41.630 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:41.630 "is_configured": true, 00:10:41.630 "data_offset": 2048, 00:10:41.630 "data_size": 63488 00:10:41.630 } 00:10:41.630 ] 00:10:41.630 } 00:10:41.630 } 00:10:41.630 }' 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:41.630 BaseBdev2 00:10:41.630 BaseBdev3' 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:41.630 "name": "NewBaseBdev", 00:10:41.630 "aliases": [ 00:10:41.630 "8aae2e87-4a2e-11ef-9c8e-7947904e2597" 00:10:41.630 ], 00:10:41.630 "product_name": "Malloc disk", 00:10:41.630 "block_size": 512, 00:10:41.630 "num_blocks": 65536, 00:10:41.630 "uuid": "8aae2e87-4a2e-11ef-9c8e-7947904e2597", 00:10:41.630 "assigned_rate_limits": { 00:10:41.630 "rw_ios_per_sec": 0, 00:10:41.630 "rw_mbytes_per_sec": 0, 00:10:41.630 "r_mbytes_per_sec": 0, 00:10:41.630 "w_mbytes_per_sec": 0 00:10:41.630 }, 00:10:41.630 "claimed": true, 00:10:41.630 "claim_type": "exclusive_write", 00:10:41.630 "zoned": false, 00:10:41.630 "supported_io_types": { 00:10:41.630 "read": true, 00:10:41.630 "write": true, 00:10:41.630 "unmap": true, 00:10:41.630 "flush": true, 00:10:41.630 "reset": true, 00:10:41.630 "nvme_admin": false, 00:10:41.630 "nvme_io": false, 00:10:41.630 "nvme_io_md": false, 00:10:41.630 "write_zeroes": true, 00:10:41.630 "zcopy": true, 00:10:41.630 "get_zone_info": false, 00:10:41.630 "zone_management": false, 00:10:41.630 "zone_append": false, 00:10:41.630 "compare": false, 00:10:41.630 "compare_and_write": false, 00:10:41.630 "abort": true, 00:10:41.630 "seek_hole": false, 00:10:41.630 "seek_data": false, 00:10:41.630 "copy": true, 00:10:41.630 "nvme_iov_md": false 00:10:41.630 }, 00:10:41.630 "memory_domains": [ 00:10:41.630 { 00:10:41.630 "dma_device_id": "system", 00:10:41.630 "dma_device_type": 1 00:10:41.630 }, 00:10:41.630 { 00:10:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.630 "dma_device_type": 2 00:10:41.630 } 00:10:41.630 ], 00:10:41.630 "driver_specific": {} 00:10:41.630 }' 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.630 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:41.890 "name": "BaseBdev2", 00:10:41.890 "aliases": [ 00:10:41.890 "893011cb-4a2e-11ef-9c8e-7947904e2597" 00:10:41.890 ], 00:10:41.890 "product_name": "Malloc disk", 00:10:41.890 "block_size": 512, 00:10:41.890 "num_blocks": 65536, 00:10:41.890 "uuid": "893011cb-4a2e-11ef-9c8e-7947904e2597", 00:10:41.890 "assigned_rate_limits": { 00:10:41.890 "rw_ios_per_sec": 0, 00:10:41.890 "rw_mbytes_per_sec": 0, 00:10:41.890 "r_mbytes_per_sec": 0, 00:10:41.890 "w_mbytes_per_sec": 0 00:10:41.890 }, 00:10:41.890 "claimed": true, 00:10:41.890 "claim_type": "exclusive_write", 00:10:41.890 "zoned": false, 00:10:41.890 "supported_io_types": { 00:10:41.890 "read": true, 00:10:41.890 "write": true, 00:10:41.890 "unmap": true, 00:10:41.890 "flush": true, 00:10:41.890 "reset": true, 00:10:41.890 "nvme_admin": false, 00:10:41.890 "nvme_io": false, 00:10:41.890 "nvme_io_md": false, 00:10:41.890 "write_zeroes": true, 00:10:41.890 "zcopy": true, 00:10:41.890 "get_zone_info": false, 00:10:41.890 "zone_management": false, 00:10:41.890 "zone_append": false, 00:10:41.890 "compare": false, 00:10:41.890 "compare_and_write": false, 00:10:41.890 "abort": true, 00:10:41.890 "seek_hole": false, 00:10:41.890 "seek_data": false, 00:10:41.890 "copy": true, 00:10:41.890 "nvme_iov_md": false 00:10:41.890 }, 00:10:41.890 "memory_domains": [ 00:10:41.890 { 00:10:41.890 "dma_device_id": "system", 00:10:41.890 "dma_device_type": 1 00:10:41.890 }, 00:10:41.890 { 00:10:41.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.890 "dma_device_type": 2 00:10:41.890 } 00:10:41.890 ], 00:10:41.890 "driver_specific": {} 00:10:41.890 }' 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.890 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:42.149 02:35:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:42.149 02:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:42.149 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:42.149 "name": "BaseBdev3", 00:10:42.149 "aliases": [ 00:10:42.149 "89827943-4a2e-11ef-9c8e-7947904e2597" 00:10:42.149 ], 00:10:42.149 "product_name": "Malloc disk", 00:10:42.149 "block_size": 512, 00:10:42.149 "num_blocks": 65536, 00:10:42.149 "uuid": "89827943-4a2e-11ef-9c8e-7947904e2597", 00:10:42.149 "assigned_rate_limits": { 00:10:42.149 "rw_ios_per_sec": 0, 00:10:42.149 "rw_mbytes_per_sec": 0, 00:10:42.149 "r_mbytes_per_sec": 0, 00:10:42.149 "w_mbytes_per_sec": 0 00:10:42.149 }, 00:10:42.149 "claimed": true, 00:10:42.149 "claim_type": "exclusive_write", 00:10:42.149 "zoned": false, 00:10:42.149 "supported_io_types": { 00:10:42.149 "read": true, 00:10:42.149 "write": true, 00:10:42.149 "unmap": true, 00:10:42.149 "flush": true, 00:10:42.149 "reset": true, 00:10:42.149 "nvme_admin": false, 00:10:42.149 "nvme_io": false, 00:10:42.149 "nvme_io_md": false, 00:10:42.149 "write_zeroes": true, 00:10:42.149 "zcopy": true, 00:10:42.149 "get_zone_info": false, 00:10:42.149 "zone_management": false, 00:10:42.149 "zone_append": false, 00:10:42.149 "compare": false, 00:10:42.149 "compare_and_write": false, 00:10:42.149 "abort": true, 00:10:42.149 "seek_hole": false, 00:10:42.149 "seek_data": false, 00:10:42.149 "copy": true, 00:10:42.149 "nvme_iov_md": false 00:10:42.149 }, 00:10:42.149 "memory_domains": [ 00:10:42.149 { 00:10:42.149 "dma_device_id": "system", 00:10:42.149 "dma_device_type": 1 00:10:42.149 }, 00:10:42.149 { 00:10:42.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.149 "dma_device_type": 2 00:10:42.149 } 00:10:42.149 ], 00:10:42.149 "driver_specific": {} 00:10:42.149 }' 00:10:42.149 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:42.408 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:42.409 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:10:42.409 [2024-07-25 02:35:29.316914] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.409 [2024-07-25 02:35:29.316928] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.409 [2024-07-25 02:35:29.316942] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.409 [2024-07-25 02:35:29.317003] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.409 [2024-07-25 02:35:29.317006] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341f9c834f00 name Existed_Raid, state offline 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56575 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56575 ']' 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56575 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56575 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:42.668 killing process with pid 56575 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56575' 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56575 00:10:42.668 [2024-07-25 02:35:29.345032] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56575 00:10:42.668 [2024-07-25 02:35:29.358871] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:42.668 00:10:42.668 real 0m17.670s 00:10:42.668 user 0m31.960s 00:10:42.668 sys 0m2.774s 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.668 02:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.668 ************************************ 00:10:42.668 END TEST raid_state_function_test_sb 00:10:42.668 ************************************ 00:10:42.668 02:35:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:42.668 02:35:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:42.668 02:35:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:42.668 02:35:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.668 02:35:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.927 ************************************ 00:10:42.927 START TEST raid_superblock_test 00:10:42.927 ************************************ 00:10:42.927 02:35:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57279 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57279 /var/tmp/spdk-raid.sock 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57279 ']' 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.927 02:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.927 [2024-07-25 02:35:29.600770] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
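A minimal sketch of the startup pattern recorded above, assuming the same bdev_svc binary, rpc.py script, and socket path as in the trace (the waitforlisten helper comes from autotest_common.sh; capturing the background PID with $! is an assumption about how the script assigns raid_pid):
# Launch the stand-alone bdev service with a private RPC socket and raid debug logging.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# Block until the process is up and listening on the UNIX-domain RPC socket before issuing RPCs.
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
# Every subsequent test step in the trace is an rpc.py call against that socket, e.g.:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1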
00:10:42.927 [2024-07-25 02:35:29.601021] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:43.186 EAL: TSC is not safe to use in SMP mode 00:10:43.186 EAL: TSC is not invariant 00:10:43.186 [2024-07-25 02:35:30.015206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.445 [2024-07-25 02:35:30.107468] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:43.445 [2024-07-25 02:35:30.109134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.445 [2024-07-25 02:35:30.109750] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.445 [2024-07-25 02:35:30.109760] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.705 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:43.964 malloc1 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.964 [2024-07-25 02:35:30.808829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.964 [2024-07-25 02:35:30.808868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.964 [2024-07-25 02:35:30.808891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a434780 00:10:43.964 [2024-07-25 02:35:30.808896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.964 [2024-07-25 02:35:30.809582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.964 [2024-07-25 02:35:30.809607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.964 pt1 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.964 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:44.224 malloc2 00:10:44.224 02:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.483 [2024-07-25 02:35:31.148856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.483 [2024-07-25 02:35:31.148894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.483 [2024-07-25 02:35:31.148900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a434c80 00:10:44.483 [2024-07-25 02:35:31.148905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.483 [2024-07-25 02:35:31.149366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.483 [2024-07-25 02:35:31.149393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.483 pt2 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:44.483 malloc3 00:10:44.483 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.743 [2024-07-25 02:35:31.512888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.743 [2024-07-25 02:35:31.512921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.743 [2024-07-25 02:35:31.512929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a435180 00:10:44.743 [2024-07-25 02:35:31.512934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.743 [2024-07-25 02:35:31.513383] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.743 [2024-07-25 02:35:31.513408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.743 pt3 00:10:44.743 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:44.743 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:44.743 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:45.002 [2024-07-25 02:35:31.696904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.002 [2024-07-25 02:35:31.697307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.002 [2024-07-25 02:35:31.697328] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:45.002 [2024-07-25 02:35:31.697374] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c75a435400 00:10:45.002 [2024-07-25 02:35:31.697379] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.002 [2024-07-25 02:35:31.697405] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c75a497e20 00:10:45.002 [2024-07-25 02:35:31.697465] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c75a435400 00:10:45.002 [2024-07-25 02:35:31.697468] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c75a435400 00:10:45.002 [2024-07-25 02:35:31.697503] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.002 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.003 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.003 "name": "raid_bdev1", 00:10:45.003 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:45.003 "strip_size_kb": 0, 00:10:45.003 "state": "online", 00:10:45.003 "raid_level": "raid1", 00:10:45.003 "superblock": true, 00:10:45.003 "num_base_bdevs": 3, 00:10:45.003 
"num_base_bdevs_discovered": 3, 00:10:45.003 "num_base_bdevs_operational": 3, 00:10:45.003 "base_bdevs_list": [ 00:10:45.003 { 00:10:45.003 "name": "pt1", 00:10:45.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.003 "is_configured": true, 00:10:45.003 "data_offset": 2048, 00:10:45.003 "data_size": 63488 00:10:45.003 }, 00:10:45.003 { 00:10:45.003 "name": "pt2", 00:10:45.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.003 "is_configured": true, 00:10:45.003 "data_offset": 2048, 00:10:45.003 "data_size": 63488 00:10:45.003 }, 00:10:45.003 { 00:10:45.003 "name": "pt3", 00:10:45.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.003 "is_configured": true, 00:10:45.003 "data_offset": 2048, 00:10:45.003 "data_size": 63488 00:10:45.003 } 00:10:45.003 ] 00:10:45.003 }' 00:10:45.003 02:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.003 02:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:45.262 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:45.521 [2024-07-25 02:35:32.320970] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.521 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:45.521 "name": "raid_bdev1", 00:10:45.521 "aliases": [ 00:10:45.521 "9004f22e-4a2e-11ef-9c8e-7947904e2597" 00:10:45.521 ], 00:10:45.521 "product_name": "Raid Volume", 00:10:45.521 "block_size": 512, 00:10:45.521 "num_blocks": 63488, 00:10:45.521 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:45.521 "assigned_rate_limits": { 00:10:45.521 "rw_ios_per_sec": 0, 00:10:45.521 "rw_mbytes_per_sec": 0, 00:10:45.521 "r_mbytes_per_sec": 0, 00:10:45.521 "w_mbytes_per_sec": 0 00:10:45.521 }, 00:10:45.521 "claimed": false, 00:10:45.521 "zoned": false, 00:10:45.521 "supported_io_types": { 00:10:45.521 "read": true, 00:10:45.521 "write": true, 00:10:45.521 "unmap": false, 00:10:45.521 "flush": false, 00:10:45.521 "reset": true, 00:10:45.521 "nvme_admin": false, 00:10:45.521 "nvme_io": false, 00:10:45.521 "nvme_io_md": false, 00:10:45.521 "write_zeroes": true, 00:10:45.521 "zcopy": false, 00:10:45.521 "get_zone_info": false, 00:10:45.521 "zone_management": false, 00:10:45.521 "zone_append": false, 00:10:45.521 "compare": false, 00:10:45.521 "compare_and_write": false, 00:10:45.521 "abort": false, 00:10:45.521 "seek_hole": false, 00:10:45.521 "seek_data": false, 00:10:45.521 "copy": false, 00:10:45.521 "nvme_iov_md": false 00:10:45.521 }, 00:10:45.521 "memory_domains": [ 00:10:45.521 { 00:10:45.521 "dma_device_id": "system", 00:10:45.521 "dma_device_type": 1 00:10:45.521 }, 00:10:45.521 { 
00:10:45.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.521 "dma_device_type": 2 00:10:45.521 }, 00:10:45.521 { 00:10:45.521 "dma_device_id": "system", 00:10:45.521 "dma_device_type": 1 00:10:45.521 }, 00:10:45.521 { 00:10:45.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.521 "dma_device_type": 2 00:10:45.521 }, 00:10:45.521 { 00:10:45.521 "dma_device_id": "system", 00:10:45.521 "dma_device_type": 1 00:10:45.521 }, 00:10:45.521 { 00:10:45.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.521 "dma_device_type": 2 00:10:45.521 } 00:10:45.521 ], 00:10:45.521 "driver_specific": { 00:10:45.521 "raid": { 00:10:45.521 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:45.521 "strip_size_kb": 0, 00:10:45.521 "state": "online", 00:10:45.522 "raid_level": "raid1", 00:10:45.522 "superblock": true, 00:10:45.522 "num_base_bdevs": 3, 00:10:45.522 "num_base_bdevs_discovered": 3, 00:10:45.522 "num_base_bdevs_operational": 3, 00:10:45.522 "base_bdevs_list": [ 00:10:45.522 { 00:10:45.522 "name": "pt1", 00:10:45.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.522 "is_configured": true, 00:10:45.522 "data_offset": 2048, 00:10:45.522 "data_size": 63488 00:10:45.522 }, 00:10:45.522 { 00:10:45.522 "name": "pt2", 00:10:45.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.522 "is_configured": true, 00:10:45.522 "data_offset": 2048, 00:10:45.522 "data_size": 63488 00:10:45.522 }, 00:10:45.522 { 00:10:45.522 "name": "pt3", 00:10:45.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.522 "is_configured": true, 00:10:45.522 "data_offset": 2048, 00:10:45.522 "data_size": 63488 00:10:45.522 } 00:10:45.522 ] 00:10:45.522 } 00:10:45.522 } 00:10:45.522 }' 00:10:45.522 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.522 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:45.522 pt2 00:10:45.522 pt3' 00:10:45.522 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:45.522 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:45.522 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:45.781 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:45.781 "name": "pt1", 00:10:45.781 "aliases": [ 00:10:45.781 "00000000-0000-0000-0000-000000000001" 00:10:45.781 ], 00:10:45.781 "product_name": "passthru", 00:10:45.781 "block_size": 512, 00:10:45.781 "num_blocks": 65536, 00:10:45.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.781 "assigned_rate_limits": { 00:10:45.781 "rw_ios_per_sec": 0, 00:10:45.781 "rw_mbytes_per_sec": 0, 00:10:45.781 "r_mbytes_per_sec": 0, 00:10:45.781 "w_mbytes_per_sec": 0 00:10:45.781 }, 00:10:45.781 "claimed": true, 00:10:45.781 "claim_type": "exclusive_write", 00:10:45.781 "zoned": false, 00:10:45.781 "supported_io_types": { 00:10:45.781 "read": true, 00:10:45.781 "write": true, 00:10:45.781 "unmap": true, 00:10:45.781 "flush": true, 00:10:45.781 "reset": true, 00:10:45.781 "nvme_admin": false, 00:10:45.781 "nvme_io": false, 00:10:45.781 "nvme_io_md": false, 00:10:45.781 "write_zeroes": true, 00:10:45.781 "zcopy": true, 00:10:45.781 "get_zone_info": false, 00:10:45.782 "zone_management": false, 00:10:45.782 "zone_append": false, 00:10:45.782 
"compare": false, 00:10:45.782 "compare_and_write": false, 00:10:45.782 "abort": true, 00:10:45.782 "seek_hole": false, 00:10:45.782 "seek_data": false, 00:10:45.782 "copy": true, 00:10:45.782 "nvme_iov_md": false 00:10:45.782 }, 00:10:45.782 "memory_domains": [ 00:10:45.782 { 00:10:45.782 "dma_device_id": "system", 00:10:45.782 "dma_device_type": 1 00:10:45.782 }, 00:10:45.782 { 00:10:45.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.782 "dma_device_type": 2 00:10:45.782 } 00:10:45.782 ], 00:10:45.782 "driver_specific": { 00:10:45.782 "passthru": { 00:10:45.782 "name": "pt1", 00:10:45.782 "base_bdev_name": "malloc1" 00:10:45.782 } 00:10:45.782 } 00:10:45.782 }' 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:45.782 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:46.041 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:46.042 "name": "pt2", 00:10:46.042 "aliases": [ 00:10:46.042 "00000000-0000-0000-0000-000000000002" 00:10:46.042 ], 00:10:46.042 "product_name": "passthru", 00:10:46.042 "block_size": 512, 00:10:46.042 "num_blocks": 65536, 00:10:46.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.042 "assigned_rate_limits": { 00:10:46.042 "rw_ios_per_sec": 0, 00:10:46.042 "rw_mbytes_per_sec": 0, 00:10:46.042 "r_mbytes_per_sec": 0, 00:10:46.042 "w_mbytes_per_sec": 0 00:10:46.042 }, 00:10:46.042 "claimed": true, 00:10:46.042 "claim_type": "exclusive_write", 00:10:46.042 "zoned": false, 00:10:46.042 "supported_io_types": { 00:10:46.042 "read": true, 00:10:46.042 "write": true, 00:10:46.042 "unmap": true, 00:10:46.042 "flush": true, 00:10:46.042 "reset": true, 00:10:46.042 "nvme_admin": false, 00:10:46.042 "nvme_io": false, 00:10:46.042 "nvme_io_md": false, 00:10:46.042 "write_zeroes": true, 00:10:46.042 "zcopy": true, 00:10:46.042 "get_zone_info": false, 00:10:46.042 "zone_management": false, 00:10:46.042 "zone_append": false, 00:10:46.042 "compare": false, 00:10:46.042 "compare_and_write": false, 00:10:46.042 "abort": true, 00:10:46.042 "seek_hole": false, 00:10:46.042 "seek_data": false, 
00:10:46.042 "copy": true, 00:10:46.042 "nvme_iov_md": false 00:10:46.042 }, 00:10:46.042 "memory_domains": [ 00:10:46.042 { 00:10:46.042 "dma_device_id": "system", 00:10:46.042 "dma_device_type": 1 00:10:46.042 }, 00:10:46.042 { 00:10:46.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.042 "dma_device_type": 2 00:10:46.042 } 00:10:46.042 ], 00:10:46.042 "driver_specific": { 00:10:46.042 "passthru": { 00:10:46.042 "name": "pt2", 00:10:46.042 "base_bdev_name": "malloc2" 00:10:46.042 } 00:10:46.042 } 00:10:46.042 }' 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:46.042 02:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:46.301 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:46.301 "name": "pt3", 00:10:46.301 "aliases": [ 00:10:46.301 "00000000-0000-0000-0000-000000000003" 00:10:46.301 ], 00:10:46.301 "product_name": "passthru", 00:10:46.301 "block_size": 512, 00:10:46.301 "num_blocks": 65536, 00:10:46.301 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.301 "assigned_rate_limits": { 00:10:46.301 "rw_ios_per_sec": 0, 00:10:46.301 "rw_mbytes_per_sec": 0, 00:10:46.301 "r_mbytes_per_sec": 0, 00:10:46.301 "w_mbytes_per_sec": 0 00:10:46.301 }, 00:10:46.301 "claimed": true, 00:10:46.301 "claim_type": "exclusive_write", 00:10:46.301 "zoned": false, 00:10:46.301 "supported_io_types": { 00:10:46.301 "read": true, 00:10:46.301 "write": true, 00:10:46.301 "unmap": true, 00:10:46.301 "flush": true, 00:10:46.301 "reset": true, 00:10:46.301 "nvme_admin": false, 00:10:46.301 "nvme_io": false, 00:10:46.301 "nvme_io_md": false, 00:10:46.301 "write_zeroes": true, 00:10:46.301 "zcopy": true, 00:10:46.301 "get_zone_info": false, 00:10:46.301 "zone_management": false, 00:10:46.301 "zone_append": false, 00:10:46.301 "compare": false, 00:10:46.301 "compare_and_write": false, 00:10:46.301 "abort": true, 00:10:46.301 "seek_hole": false, 00:10:46.301 "seek_data": false, 00:10:46.301 "copy": true, 00:10:46.301 "nvme_iov_md": false 00:10:46.301 }, 00:10:46.301 "memory_domains": [ 00:10:46.301 { 00:10:46.301 "dma_device_id": 
"system", 00:10:46.301 "dma_device_type": 1 00:10:46.301 }, 00:10:46.301 { 00:10:46.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.301 "dma_device_type": 2 00:10:46.301 } 00:10:46.301 ], 00:10:46.301 "driver_specific": { 00:10:46.301 "passthru": { 00:10:46.302 "name": "pt3", 00:10:46.302 "base_bdev_name": "malloc3" 00:10:46.302 } 00:10:46.302 } 00:10:46.302 }' 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:46.302 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:46.564 [2024-07-25 02:35:33.325060] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.564 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9004f22e-4a2e-11ef-9c8e-7947904e2597 00:10:46.564 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9004f22e-4a2e-11ef-9c8e-7947904e2597 ']' 00:10:46.564 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:46.823 [2024-07-25 02:35:33.509055] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.823 [2024-07-25 02:35:33.509067] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.823 [2024-07-25 02:35:33.509080] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.823 [2024-07-25 02:35:33.509108] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.823 [2024-07-25 02:35:33.509111] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c75a435400 name raid_bdev1, state offline 00:10:46.823 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.823 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:46.823 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:46.823 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:10:46.823 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:46.823 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:47.081 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.081 02:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:47.339 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:47.339 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:47.599 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:47.859 [2024-07-25 02:35:34.613192] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:47.859 [2024-07-25 02:35:34.613635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:47.859 [2024-07-25 02:35:34.613652] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:47.859 
[2024-07-25 02:35:34.613663] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:47.859 [2024-07-25 02:35:34.613690] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:47.859 [2024-07-25 02:35:34.613714] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:47.859 [2024-07-25 02:35:34.613720] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.859 [2024-07-25 02:35:34.613723] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c75a435180 name raid_bdev1, state configuring 00:10:47.859 request: 00:10:47.859 { 00:10:47.859 "name": "raid_bdev1", 00:10:47.859 "raid_level": "raid1", 00:10:47.859 "base_bdevs": [ 00:10:47.859 "malloc1", 00:10:47.859 "malloc2", 00:10:47.859 "malloc3" 00:10:47.859 ], 00:10:47.859 "superblock": false, 00:10:47.859 "method": "bdev_raid_create", 00:10:47.859 "req_id": 1 00:10:47.859 } 00:10:47.859 Got JSON-RPC error response 00:10:47.859 response: 00:10:47.859 { 00:10:47.859 "code": -17, 00:10:47.859 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:47.859 } 00:10:47.859 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:47.859 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:47.859 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:47.859 02:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:47.859 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.859 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.119 [2024-07-25 02:35:34.981224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.119 [2024-07-25 02:35:34.981252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.119 [2024-07-25 02:35:34.981276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a434c80 00:10:48.119 [2024-07-25 02:35:34.981281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.119 [2024-07-25 02:35:34.981762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.119 [2024-07-25 02:35:34.981804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.119 [2024-07-25 02:35:34.981821] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:48.119 [2024-07-25 02:35:34.981831] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.119 pt1 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:48.119 
02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.119 02:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.379 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.379 "name": "raid_bdev1", 00:10:48.379 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:48.379 "strip_size_kb": 0, 00:10:48.379 "state": "configuring", 00:10:48.379 "raid_level": "raid1", 00:10:48.379 "superblock": true, 00:10:48.379 "num_base_bdevs": 3, 00:10:48.379 "num_base_bdevs_discovered": 1, 00:10:48.379 "num_base_bdevs_operational": 3, 00:10:48.379 "base_bdevs_list": [ 00:10:48.379 { 00:10:48.379 "name": "pt1", 00:10:48.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.379 "is_configured": true, 00:10:48.379 "data_offset": 2048, 00:10:48.379 "data_size": 63488 00:10:48.379 }, 00:10:48.379 { 00:10:48.379 "name": null, 00:10:48.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.379 "is_configured": false, 00:10:48.379 "data_offset": 2048, 00:10:48.379 "data_size": 63488 00:10:48.379 }, 00:10:48.379 { 00:10:48.379 "name": null, 00:10:48.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.379 "is_configured": false, 00:10:48.379 "data_offset": 2048, 00:10:48.379 "data_size": 63488 00:10:48.379 } 00:10:48.379 ] 00:10:48.379 }' 00:10:48.379 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.379 02:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.639 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:10:48.639 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.899 [2024-07-25 02:35:35.593278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.899 [2024-07-25 02:35:35.593307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.899 [2024-07-25 02:35:35.593314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a435680 00:10:48.899 [2024-07-25 02:35:35.593319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.899 [2024-07-25 02:35:35.593410] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:10:48.899 [2024-07-25 02:35:35.593416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.899 [2024-07-25 02:35:35.593445] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:48.899 [2024-07-25 02:35:35.593452] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.899 pt2 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:48.899 [2024-07-25 02:35:35.773292] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.899 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.158 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.158 "name": "raid_bdev1", 00:10:49.158 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:49.158 "strip_size_kb": 0, 00:10:49.158 "state": "configuring", 00:10:49.158 "raid_level": "raid1", 00:10:49.158 "superblock": true, 00:10:49.158 "num_base_bdevs": 3, 00:10:49.158 "num_base_bdevs_discovered": 1, 00:10:49.158 "num_base_bdevs_operational": 3, 00:10:49.158 "base_bdevs_list": [ 00:10:49.158 { 00:10:49.158 "name": "pt1", 00:10:49.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.158 "is_configured": true, 00:10:49.158 "data_offset": 2048, 00:10:49.158 "data_size": 63488 00:10:49.158 }, 00:10:49.158 { 00:10:49.158 "name": null, 00:10:49.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.158 "is_configured": false, 00:10:49.158 "data_offset": 2048, 00:10:49.158 "data_size": 63488 00:10:49.158 }, 00:10:49.158 { 00:10:49.158 "name": null, 00:10:49.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.158 "is_configured": false, 00:10:49.158 "data_offset": 2048, 00:10:49.158 "data_size": 63488 00:10:49.158 } 00:10:49.158 ] 00:10:49.158 }' 00:10:49.158 02:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.158 02:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.418 02:35:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:49.418 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:49.418 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.677 [2024-07-25 02:35:36.389346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.677 [2024-07-25 02:35:36.389374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.677 [2024-07-25 02:35:36.389401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a435680 00:10:49.677 [2024-07-25 02:35:36.389407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.677 [2024-07-25 02:35:36.389492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.677 [2024-07-25 02:35:36.389498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.677 [2024-07-25 02:35:36.389522] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:49.677 [2024-07-25 02:35:36.389527] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.677 pt2 00:10:49.677 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:49.677 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:49.677 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.677 [2024-07-25 02:35:36.549362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.677 [2024-07-25 02:35:36.549392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.678 [2024-07-25 02:35:36.549414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a435400 00:10:49.678 [2024-07-25 02:35:36.549436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.678 [2024-07-25 02:35:36.549496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.678 [2024-07-25 02:35:36.549502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.678 [2024-07-25 02:35:36.549515] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:49.678 [2024-07-25 02:35:36.549521] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.678 [2024-07-25 02:35:36.549539] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c75a434780 00:10:49.678 [2024-07-25 02:35:36.549542] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.678 [2024-07-25 02:35:36.549558] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c75a497e20 00:10:49.678 [2024-07-25 02:35:36.549593] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c75a434780 00:10:49.678 [2024-07-25 02:35:36.549596] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c75a434780 00:10:49.678 [2024-07-25 02:35:36.549611] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.678 
pt3 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.678 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.938 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.938 "name": "raid_bdev1", 00:10:49.938 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:49.938 "strip_size_kb": 0, 00:10:49.938 "state": "online", 00:10:49.938 "raid_level": "raid1", 00:10:49.938 "superblock": true, 00:10:49.938 "num_base_bdevs": 3, 00:10:49.938 "num_base_bdevs_discovered": 3, 00:10:49.938 "num_base_bdevs_operational": 3, 00:10:49.938 "base_bdevs_list": [ 00:10:49.938 { 00:10:49.938 "name": "pt1", 00:10:49.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.938 "is_configured": true, 00:10:49.938 "data_offset": 2048, 00:10:49.938 "data_size": 63488 00:10:49.938 }, 00:10:49.938 { 00:10:49.938 "name": "pt2", 00:10:49.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.938 "is_configured": true, 00:10:49.938 "data_offset": 2048, 00:10:49.938 "data_size": 63488 00:10:49.938 }, 00:10:49.938 { 00:10:49.938 "name": "pt3", 00:10:49.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.938 "is_configured": true, 00:10:49.938 "data_offset": 2048, 00:10:49.938 "data_size": 63488 00:10:49.938 } 00:10:49.938 ] 00:10:49.938 }' 00:10:49.938 02:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.938 02:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
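(Editorial note: the verify_raid_bdev_state call above — "raid_bdev1 online raid1 0 3" — is just a set of jq checks over the bdev_raid_get_bdevs output. A rough, hedged equivalent is sketched below; field names and expected values are taken from the JSON shown in the trace, while the use of jq -e in place of the test's bash [[ ]] comparisons is an assumption for brevity.)

    # Sketch of the state verification; exits nonzero if any check fails.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    jq -e '.state == "online" and .raid_level == "raid1"' <<<"$info"
    jq -e '.num_base_bdevs_discovered == 3 and .num_base_bdevs_operational == 3' <<<"$info"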
00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:50.198 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:50.458 [2024-07-25 02:35:37.177459] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:50.458 "name": "raid_bdev1", 00:10:50.458 "aliases": [ 00:10:50.458 "9004f22e-4a2e-11ef-9c8e-7947904e2597" 00:10:50.458 ], 00:10:50.458 "product_name": "Raid Volume", 00:10:50.458 "block_size": 512, 00:10:50.458 "num_blocks": 63488, 00:10:50.458 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:50.458 "assigned_rate_limits": { 00:10:50.458 "rw_ios_per_sec": 0, 00:10:50.458 "rw_mbytes_per_sec": 0, 00:10:50.458 "r_mbytes_per_sec": 0, 00:10:50.458 "w_mbytes_per_sec": 0 00:10:50.458 }, 00:10:50.458 "claimed": false, 00:10:50.458 "zoned": false, 00:10:50.458 "supported_io_types": { 00:10:50.458 "read": true, 00:10:50.458 "write": true, 00:10:50.458 "unmap": false, 00:10:50.458 "flush": false, 00:10:50.458 "reset": true, 00:10:50.458 "nvme_admin": false, 00:10:50.458 "nvme_io": false, 00:10:50.458 "nvme_io_md": false, 00:10:50.458 "write_zeroes": true, 00:10:50.458 "zcopy": false, 00:10:50.458 "get_zone_info": false, 00:10:50.458 "zone_management": false, 00:10:50.458 "zone_append": false, 00:10:50.458 "compare": false, 00:10:50.458 "compare_and_write": false, 00:10:50.458 "abort": false, 00:10:50.458 "seek_hole": false, 00:10:50.458 "seek_data": false, 00:10:50.458 "copy": false, 00:10:50.458 "nvme_iov_md": false 00:10:50.458 }, 00:10:50.458 "memory_domains": [ 00:10:50.458 { 00:10:50.458 "dma_device_id": "system", 00:10:50.458 "dma_device_type": 1 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.458 "dma_device_type": 2 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "dma_device_id": "system", 00:10:50.458 "dma_device_type": 1 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.458 "dma_device_type": 2 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "dma_device_id": "system", 00:10:50.458 "dma_device_type": 1 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.458 "dma_device_type": 2 00:10:50.458 } 00:10:50.458 ], 00:10:50.458 "driver_specific": { 00:10:50.458 "raid": { 00:10:50.458 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:50.458 "strip_size_kb": 0, 00:10:50.458 "state": "online", 00:10:50.458 "raid_level": "raid1", 00:10:50.458 "superblock": true, 00:10:50.458 "num_base_bdevs": 3, 00:10:50.458 "num_base_bdevs_discovered": 3, 00:10:50.458 "num_base_bdevs_operational": 3, 00:10:50.458 "base_bdevs_list": [ 00:10:50.458 { 00:10:50.458 "name": "pt1", 00:10:50.458 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.458 "is_configured": true, 00:10:50.458 "data_offset": 2048, 00:10:50.458 "data_size": 63488 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "name": "pt2", 00:10:50.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.458 "is_configured": true, 00:10:50.458 "data_offset": 2048, 00:10:50.458 "data_size": 63488 00:10:50.458 }, 00:10:50.458 { 00:10:50.458 "name": "pt3", 00:10:50.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.458 
"is_configured": true, 00:10:50.458 "data_offset": 2048, 00:10:50.458 "data_size": 63488 00:10:50.458 } 00:10:50.458 ] 00:10:50.458 } 00:10:50.458 } 00:10:50.458 }' 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:50.458 pt2 00:10:50.458 pt3' 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.458 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.458 "name": "pt1", 00:10:50.458 "aliases": [ 00:10:50.458 "00000000-0000-0000-0000-000000000001" 00:10:50.458 ], 00:10:50.458 "product_name": "passthru", 00:10:50.458 "block_size": 512, 00:10:50.458 "num_blocks": 65536, 00:10:50.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.459 "assigned_rate_limits": { 00:10:50.459 "rw_ios_per_sec": 0, 00:10:50.459 "rw_mbytes_per_sec": 0, 00:10:50.459 "r_mbytes_per_sec": 0, 00:10:50.459 "w_mbytes_per_sec": 0 00:10:50.459 }, 00:10:50.459 "claimed": true, 00:10:50.459 "claim_type": "exclusive_write", 00:10:50.459 "zoned": false, 00:10:50.459 "supported_io_types": { 00:10:50.459 "read": true, 00:10:50.459 "write": true, 00:10:50.459 "unmap": true, 00:10:50.459 "flush": true, 00:10:50.459 "reset": true, 00:10:50.459 "nvme_admin": false, 00:10:50.459 "nvme_io": false, 00:10:50.459 "nvme_io_md": false, 00:10:50.459 "write_zeroes": true, 00:10:50.459 "zcopy": true, 00:10:50.459 "get_zone_info": false, 00:10:50.459 "zone_management": false, 00:10:50.459 "zone_append": false, 00:10:50.459 "compare": false, 00:10:50.459 "compare_and_write": false, 00:10:50.459 "abort": true, 00:10:50.459 "seek_hole": false, 00:10:50.459 "seek_data": false, 00:10:50.459 "copy": true, 00:10:50.459 "nvme_iov_md": false 00:10:50.459 }, 00:10:50.459 "memory_domains": [ 00:10:50.459 { 00:10:50.459 "dma_device_id": "system", 00:10:50.459 "dma_device_type": 1 00:10:50.459 }, 00:10:50.459 { 00:10:50.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.459 "dma_device_type": 2 00:10:50.459 } 00:10:50.459 ], 00:10:50.459 "driver_specific": { 00:10:50.459 "passthru": { 00:10:50.459 "name": "pt1", 00:10:50.459 "base_bdev_name": "malloc1" 00:10:50.459 } 00:10:50.459 } 00:10:50.459 }' 00:10:50.459 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # [[ null == null ]] 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:50.719 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.979 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.979 "name": "pt2", 00:10:50.979 "aliases": [ 00:10:50.979 "00000000-0000-0000-0000-000000000002" 00:10:50.979 ], 00:10:50.979 "product_name": "passthru", 00:10:50.979 "block_size": 512, 00:10:50.979 "num_blocks": 65536, 00:10:50.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.979 "assigned_rate_limits": { 00:10:50.979 "rw_ios_per_sec": 0, 00:10:50.979 "rw_mbytes_per_sec": 0, 00:10:50.979 "r_mbytes_per_sec": 0, 00:10:50.979 "w_mbytes_per_sec": 0 00:10:50.979 }, 00:10:50.979 "claimed": true, 00:10:50.979 "claim_type": "exclusive_write", 00:10:50.979 "zoned": false, 00:10:50.979 "supported_io_types": { 00:10:50.979 "read": true, 00:10:50.979 "write": true, 00:10:50.979 "unmap": true, 00:10:50.979 "flush": true, 00:10:50.979 "reset": true, 00:10:50.979 "nvme_admin": false, 00:10:50.980 "nvme_io": false, 00:10:50.980 "nvme_io_md": false, 00:10:50.980 "write_zeroes": true, 00:10:50.980 "zcopy": true, 00:10:50.980 "get_zone_info": false, 00:10:50.980 "zone_management": false, 00:10:50.980 "zone_append": false, 00:10:50.980 "compare": false, 00:10:50.980 "compare_and_write": false, 00:10:50.980 "abort": true, 00:10:50.980 "seek_hole": false, 00:10:50.980 "seek_data": false, 00:10:50.980 "copy": true, 00:10:50.980 "nvme_iov_md": false 00:10:50.980 }, 00:10:50.980 "memory_domains": [ 00:10:50.980 { 00:10:50.980 "dma_device_id": "system", 00:10:50.980 "dma_device_type": 1 00:10:50.980 }, 00:10:50.980 { 00:10:50.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.980 "dma_device_type": 2 00:10:50.980 } 00:10:50.980 ], 00:10:50.980 "driver_specific": { 00:10:50.980 "passthru": { 00:10:50.980 "name": "pt2", 00:10:50.980 "base_bdev_name": "malloc2" 00:10:50.980 } 00:10:50.980 } 00:10:50.980 }' 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.980 02:35:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:50.980 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:51.240 "name": "pt3", 00:10:51.240 "aliases": [ 00:10:51.240 "00000000-0000-0000-0000-000000000003" 00:10:51.240 ], 00:10:51.240 "product_name": "passthru", 00:10:51.240 "block_size": 512, 00:10:51.240 "num_blocks": 65536, 00:10:51.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.240 "assigned_rate_limits": { 00:10:51.240 "rw_ios_per_sec": 0, 00:10:51.240 "rw_mbytes_per_sec": 0, 00:10:51.240 "r_mbytes_per_sec": 0, 00:10:51.240 "w_mbytes_per_sec": 0 00:10:51.240 }, 00:10:51.240 "claimed": true, 00:10:51.240 "claim_type": "exclusive_write", 00:10:51.240 "zoned": false, 00:10:51.240 "supported_io_types": { 00:10:51.240 "read": true, 00:10:51.240 "write": true, 00:10:51.240 "unmap": true, 00:10:51.240 "flush": true, 00:10:51.240 "reset": true, 00:10:51.240 "nvme_admin": false, 00:10:51.240 "nvme_io": false, 00:10:51.240 "nvme_io_md": false, 00:10:51.240 "write_zeroes": true, 00:10:51.240 "zcopy": true, 00:10:51.240 "get_zone_info": false, 00:10:51.240 "zone_management": false, 00:10:51.240 "zone_append": false, 00:10:51.240 "compare": false, 00:10:51.240 "compare_and_write": false, 00:10:51.240 "abort": true, 00:10:51.240 "seek_hole": false, 00:10:51.240 "seek_data": false, 00:10:51.240 "copy": true, 00:10:51.240 "nvme_iov_md": false 00:10:51.240 }, 00:10:51.240 "memory_domains": [ 00:10:51.240 { 00:10:51.240 "dma_device_id": "system", 00:10:51.240 "dma_device_type": 1 00:10:51.240 }, 00:10:51.240 { 00:10:51.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.240 "dma_device_type": 2 00:10:51.240 } 00:10:51.240 ], 00:10:51.240 "driver_specific": { 00:10:51.240 "passthru": { 00:10:51.240 "name": "pt3", 00:10:51.240 "base_bdev_name": "malloc3" 00:10:51.240 } 00:10:51.240 } 00:10:51.240 }' 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:51.240 02:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.240 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:51.240 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:10:51.240 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:51.240 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:51.503 [2024-07-25 02:35:38.177539] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9004f22e-4a2e-11ef-9c8e-7947904e2597 '!=' 9004f22e-4a2e-11ef-9c8e-7947904e2597 ']' 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:51.503 [2024-07-25 02:35:38.361539] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.503 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.777 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.777 "name": "raid_bdev1", 00:10:51.777 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:51.777 "strip_size_kb": 0, 00:10:51.777 "state": "online", 00:10:51.777 "raid_level": "raid1", 00:10:51.777 "superblock": true, 00:10:51.777 "num_base_bdevs": 3, 00:10:51.777 "num_base_bdevs_discovered": 2, 00:10:51.777 "num_base_bdevs_operational": 2, 00:10:51.777 "base_bdevs_list": [ 00:10:51.777 { 00:10:51.777 "name": null, 00:10:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.777 "is_configured": false, 00:10:51.777 "data_offset": 2048, 00:10:51.777 "data_size": 63488 00:10:51.777 }, 00:10:51.777 { 00:10:51.777 "name": "pt2", 00:10:51.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.777 "is_configured": true, 00:10:51.777 "data_offset": 2048, 00:10:51.777 "data_size": 63488 00:10:51.777 }, 00:10:51.777 { 00:10:51.777 "name": "pt3", 
00:10:51.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.777 "is_configured": true, 00:10:51.777 "data_offset": 2048, 00:10:51.777 "data_size": 63488 00:10:51.777 } 00:10:51.777 ] 00:10:51.777 }' 00:10:51.777 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.777 02:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 02:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:52.294 [2024-07-25 02:35:38.993602] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.294 [2024-07-25 02:35:38.993618] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.294 [2024-07-25 02:35:38.993632] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.294 [2024-07-25 02:35:38.993643] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.294 [2024-07-25 02:35:38.993646] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c75a434780 name raid_bdev1, state offline 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:10:52.294 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:52.554 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:10:52.554 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:10:52.554 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.813 [2024-07-25 02:35:39.693675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.813 [2024-07-25 02:35:39.693708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.813 [2024-07-25 02:35:39.693731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a435400 00:10:52.813 [2024-07-25 02:35:39.693737] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.813 [2024-07-25 02:35:39.694228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.813 [2024-07-25 02:35:39.694253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.813 [2024-07-25 02:35:39.694271] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.813 [2024-07-25 02:35:39.694280] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.813 pt2 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.813 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.072 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.072 "name": "raid_bdev1", 00:10:53.072 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:53.072 "strip_size_kb": 0, 00:10:53.072 "state": "configuring", 00:10:53.072 "raid_level": "raid1", 00:10:53.072 "superblock": true, 00:10:53.072 "num_base_bdevs": 3, 00:10:53.072 "num_base_bdevs_discovered": 1, 00:10:53.072 "num_base_bdevs_operational": 2, 00:10:53.072 "base_bdevs_list": [ 00:10:53.072 { 00:10:53.072 "name": null, 00:10:53.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.072 "is_configured": false, 00:10:53.072 "data_offset": 2048, 00:10:53.072 "data_size": 63488 00:10:53.072 }, 00:10:53.072 { 00:10:53.072 "name": "pt2", 00:10:53.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.072 "is_configured": true, 00:10:53.072 "data_offset": 2048, 00:10:53.072 "data_size": 63488 00:10:53.072 }, 00:10:53.072 { 00:10:53.072 "name": null, 00:10:53.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.072 "is_configured": false, 00:10:53.072 "data_offset": 2048, 00:10:53.072 "data_size": 63488 00:10:53.072 } 00:10:53.072 ] 00:10:53.072 }' 00:10:53.072 02:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.072 02:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.331 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:10:53.331 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < 
num_base_bdevs - 1 )) 00:10:53.331 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:10:53.331 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:53.591 [2024-07-25 02:35:40.329732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:53.591 [2024-07-25 02:35:40.329763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.591 [2024-07-25 02:35:40.329786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a434780 00:10:53.591 [2024-07-25 02:35:40.329791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.591 [2024-07-25 02:35:40.329861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.591 [2024-07-25 02:35:40.329868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:53.591 [2024-07-25 02:35:40.329883] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:53.591 [2024-07-25 02:35:40.329911] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.591 [2024-07-25 02:35:40.329929] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c75a435180 00:10:53.591 [2024-07-25 02:35:40.329932] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:53.591 [2024-07-25 02:35:40.329947] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c75a497e20 00:10:53.591 [2024-07-25 02:35:40.329977] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c75a435180 00:10:53.591 [2024-07-25 02:35:40.329980] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c75a435180 00:10:53.591 [2024-07-25 02:35:40.329994] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.591 pt3 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.591 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.850 02:35:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.850 "name": "raid_bdev1", 00:10:53.850 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:53.850 "strip_size_kb": 0, 00:10:53.850 "state": "online", 00:10:53.850 "raid_level": "raid1", 00:10:53.850 "superblock": true, 00:10:53.850 "num_base_bdevs": 3, 00:10:53.850 "num_base_bdevs_discovered": 2, 00:10:53.850 "num_base_bdevs_operational": 2, 00:10:53.850 "base_bdevs_list": [ 00:10:53.850 { 00:10:53.850 "name": null, 00:10:53.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.850 "is_configured": false, 00:10:53.850 "data_offset": 2048, 00:10:53.850 "data_size": 63488 00:10:53.850 }, 00:10:53.850 { 00:10:53.850 "name": "pt2", 00:10:53.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.850 "is_configured": true, 00:10:53.850 "data_offset": 2048, 00:10:53.850 "data_size": 63488 00:10:53.850 }, 00:10:53.850 { 00:10:53.850 "name": "pt3", 00:10:53.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.850 "is_configured": true, 00:10:53.850 "data_offset": 2048, 00:10:53.851 "data_size": 63488 00:10:53.851 } 00:10:53.851 ] 00:10:53.851 }' 00:10:53.851 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.851 02:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.110 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:54.110 [2024-07-25 02:35:40.957777] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.110 [2024-07-25 02:35:40.957790] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.110 [2024-07-25 02:35:40.957809] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.110 [2024-07-25 02:35:40.957820] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.110 [2024-07-25 02:35:40.957823] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c75a435180 name raid_bdev1, state offline 00:10:54.110 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.110 02:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:10:54.370 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:10:54.370 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:10:54.370 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:10:54.370 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:10:54.370 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.630 [2024-07-25 02:35:41.473828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.630 [2024-07-25 02:35:41.473861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.630 [2024-07-25 02:35:41.473884] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a434780 00:10:54.630 [2024-07-25 02:35:41.473889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.630 [2024-07-25 02:35:41.474382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.630 [2024-07-25 02:35:41.474405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.630 [2024-07-25 02:35:41.474422] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:54.630 [2024-07-25 02:35:41.474430] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:54.630 [2024-07-25 02:35:41.474451] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:54.630 [2024-07-25 02:35:41.474454] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.630 [2024-07-25 02:35:41.474458] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c75a435180 name raid_bdev1, state configuring 00:10:54.630 [2024-07-25 02:35:41.474464] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:54.630 pt1 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.630 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.890 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.890 "name": "raid_bdev1", 00:10:54.890 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:54.890 "strip_size_kb": 0, 00:10:54.890 "state": "configuring", 00:10:54.890 "raid_level": "raid1", 00:10:54.890 "superblock": true, 00:10:54.890 "num_base_bdevs": 3, 00:10:54.890 "num_base_bdevs_discovered": 1, 00:10:54.890 "num_base_bdevs_operational": 2, 00:10:54.890 "base_bdevs_list": [ 00:10:54.890 { 00:10:54.890 "name": null, 00:10:54.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.890 "is_configured": false, 00:10:54.890 "data_offset": 2048, 00:10:54.890 "data_size": 63488 00:10:54.890 }, 00:10:54.890 { 00:10:54.890 "name": 
"pt2", 00:10:54.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:54.890 "is_configured": true, 00:10:54.890 "data_offset": 2048, 00:10:54.890 "data_size": 63488 00:10:54.890 }, 00:10:54.890 { 00:10:54.890 "name": null, 00:10:54.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:54.890 "is_configured": false, 00:10:54.890 "data_offset": 2048, 00:10:54.890 "data_size": 63488 00:10:54.890 } 00:10:54.890 ] 00:10:54.890 }' 00:10:54.890 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.890 02:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.150 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:10:55.150 02:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:55.410 [2024-07-25 02:35:42.289841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:55.410 [2024-07-25 02:35:42.289871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.410 [2024-07-25 02:35:42.289894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c75a434c80 00:10:55.410 [2024-07-25 02:35:42.289899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.410 [2024-07-25 02:35:42.289974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.410 [2024-07-25 02:35:42.289980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:55.410 [2024-07-25 02:35:42.289994] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:55.410 [2024-07-25 02:35:42.289999] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:55.410 [2024-07-25 02:35:42.290017] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c75a435180 00:10:55.410 [2024-07-25 02:35:42.290020] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.410 [2024-07-25 02:35:42.290034] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c75a497e20 00:10:55.410 [2024-07-25 02:35:42.290063] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c75a435180 00:10:55.410 [2024-07-25 02:35:42.290065] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c75a435180 00:10:55.410 [2024-07-25 02:35:42.290079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.410 pt3 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:55.410 02:35:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.410 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.670 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.670 "name": "raid_bdev1", 00:10:55.670 "uuid": "9004f22e-4a2e-11ef-9c8e-7947904e2597", 00:10:55.670 "strip_size_kb": 0, 00:10:55.670 "state": "online", 00:10:55.670 "raid_level": "raid1", 00:10:55.670 "superblock": true, 00:10:55.670 "num_base_bdevs": 3, 00:10:55.670 "num_base_bdevs_discovered": 2, 00:10:55.670 "num_base_bdevs_operational": 2, 00:10:55.670 "base_bdevs_list": [ 00:10:55.670 { 00:10:55.670 "name": null, 00:10:55.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.670 "is_configured": false, 00:10:55.670 "data_offset": 2048, 00:10:55.670 "data_size": 63488 00:10:55.670 }, 00:10:55.670 { 00:10:55.670 "name": "pt2", 00:10:55.670 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:55.670 "is_configured": true, 00:10:55.670 "data_offset": 2048, 00:10:55.670 "data_size": 63488 00:10:55.670 }, 00:10:55.670 { 00:10:55.670 "name": "pt3", 00:10:55.670 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:55.670 "is_configured": true, 00:10:55.670 "data_offset": 2048, 00:10:55.670 "data_size": 63488 00:10:55.670 } 00:10:55.670 ] 00:10:55.670 }' 00:10:55.670 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.670 02:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.930 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:10:55.930 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:56.190 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:10:56.190 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:56.190 02:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:10:56.450 [2024-07-25 02:35:43.109783] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 9004f22e-4a2e-11ef-9c8e-7947904e2597 '!=' 9004f22e-4a2e-11ef-9c8e-7947904e2597 ']' 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57279 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57279 ']' 00:10:56.450 02:35:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57279 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57279 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:56.450 killing process with pid 57279 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57279' 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57279 00:10:56.450 [2024-07-25 02:35:43.139281] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.450 [2024-07-25 02:35:43.139295] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.450 [2024-07-25 02:35:43.139318] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.450 [2024-07-25 02:35:43.139322] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c75a435180 name raid_bdev1, state offline 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57279 00:10:56.450 [2024-07-25 02:35:43.153319] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:56.450 00:10:56.450 real 0m13.732s 00:10:56.450 user 0m24.437s 00:10:56.450 sys 0m2.396s 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.450 02:35:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.450 ************************************ 00:10:56.450 END TEST raid_superblock_test 00:10:56.450 ************************************ 00:10:56.709 02:35:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:56.709 02:35:43 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:56.709 02:35:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:56.709 02:35:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.709 02:35:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.709 ************************************ 00:10:56.709 START TEST raid_read_error_test 00:10:56.709 ************************************ 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:56.709 02:35:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.j5vh74qzGj 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=57813 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 57813 /var/tmp/spdk-raid.sock 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 57813 ']' 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.709 02:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.709 [2024-07-25 02:35:43.406333] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
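(Illustrative shell, not captured output.) The read-error trace that follows drives bdevperf over /var/tmp/spdk-raid.sock; a minimal sketch of the per-base-bdev setup it performs, reusing the bdev names, sizes and RPC calls that appear verbatim in the trace below. The rpc() wrapper and the loop form are assumptions for brevity:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }  # assumed wrapper, not part of the log
  for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"         # 32 MB malloc backing bdev, 512 B blocks
    rpc bdev_error_create "BaseBdev${i}_malloc"                    # error-injection bdev named EE_BaseBdev${i}_malloc
    rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s     # raid1 with superblock
  rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure     # fail reads on one leg; the trace shows all 3 base bdevs stay discovered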
00:10:56.709 [2024-07-25 02:35:43.406598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:56.968 EAL: TSC is not safe to use in SMP mode 00:10:56.968 EAL: TSC is not invariant 00:10:56.968 [2024-07-25 02:35:43.823961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.227 [2024-07-25 02:35:43.915092] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:57.227 [2024-07-25 02:35:43.916789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.227 [2024-07-25 02:35:43.917428] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.227 [2024-07-25 02:35:43.917440] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.486 02:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.486 02:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:57.486 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:57.486 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.745 BaseBdev1_malloc 00:10:57.745 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:57.745 true 00:10:57.745 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.004 [2024-07-25 02:35:44.796249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.004 [2024-07-25 02:35:44.796306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.004 [2024-07-25 02:35:44.796325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66513c34780 00:10:58.004 [2024-07-25 02:35:44.796331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.004 [2024-07-25 02:35:44.796782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.004 [2024-07-25 02:35:44.796825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.004 BaseBdev1 00:10:58.004 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:58.004 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.263 BaseBdev2_malloc 00:10:58.263 02:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:58.263 true 00:10:58.263 02:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.523 [2024-07-25 02:35:45.340201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.523 [2024-07-25 02:35:45.340254] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.523 [2024-07-25 02:35:45.340272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66513c34c80 00:10:58.523 [2024-07-25 02:35:45.340278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.523 [2024-07-25 02:35:45.340709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.523 [2024-07-25 02:35:45.340733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.523 BaseBdev2 00:10:58.523 02:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:58.523 02:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:58.783 BaseBdev3_malloc 00:10:58.783 02:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:58.783 true 00:10:58.783 02:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.042 [2024-07-25 02:35:45.860157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.042 [2024-07-25 02:35:45.860194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.042 [2024-07-25 02:35:45.860213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x66513c35180 00:10:59.042 [2024-07-25 02:35:45.860218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.042 [2024-07-25 02:35:45.860663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.042 [2024-07-25 02:35:45.860687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.042 BaseBdev3 00:10:59.042 02:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:59.302 [2024-07-25 02:35:46.040156] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.302 [2024-07-25 02:35:46.040547] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.302 [2024-07-25 02:35:46.040573] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.302 [2024-07-25 02:35:46.040635] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x66513c35400 00:10:59.302 [2024-07-25 02:35:46.040640] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.302 [2024-07-25 02:35:46.040665] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x66513ca0e20 00:10:59.302 [2024-07-25 02:35:46.040718] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x66513c35400 00:10:59.302 [2024-07-25 02:35:46.040723] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x66513c35400 00:10:59.302 [2024-07-25 02:35:46.040746] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.302 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.562 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.562 "name": "raid_bdev1", 00:10:59.562 "uuid": "98918d61-4a2e-11ef-9c8e-7947904e2597", 00:10:59.562 "strip_size_kb": 0, 00:10:59.562 "state": "online", 00:10:59.562 "raid_level": "raid1", 00:10:59.562 "superblock": true, 00:10:59.562 "num_base_bdevs": 3, 00:10:59.562 "num_base_bdevs_discovered": 3, 00:10:59.562 "num_base_bdevs_operational": 3, 00:10:59.562 "base_bdevs_list": [ 00:10:59.562 { 00:10:59.562 "name": "BaseBdev1", 00:10:59.562 "uuid": "de6460f0-108b-fa54-b614-0a207103d895", 00:10:59.562 "is_configured": true, 00:10:59.562 "data_offset": 2048, 00:10:59.562 "data_size": 63488 00:10:59.562 }, 00:10:59.562 { 00:10:59.562 "name": "BaseBdev2", 00:10:59.562 "uuid": "6bd9fc71-16cb-a65d-b00f-e9523d911826", 00:10:59.562 "is_configured": true, 00:10:59.562 "data_offset": 2048, 00:10:59.562 "data_size": 63488 00:10:59.562 }, 00:10:59.562 { 00:10:59.562 "name": "BaseBdev3", 00:10:59.562 "uuid": "02e83e48-1891-625e-b680-2cc90571017e", 00:10:59.562 "is_configured": true, 00:10:59.562 "data_offset": 2048, 00:10:59.562 "data_size": 63488 00:10:59.562 } 00:10:59.562 ] 00:10:59.562 }' 00:10:59.562 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.562 02:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.822 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:59.822 02:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:59.822 [2024-07-25 02:35:46.600161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x66513ca0ec0 00:11:00.761 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:01.020 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:01.020 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 
-- # [[ raid1 = \r\a\i\d\1 ]] 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.021 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.281 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.281 "name": "raid_bdev1", 00:11:01.281 "uuid": "98918d61-4a2e-11ef-9c8e-7947904e2597", 00:11:01.281 "strip_size_kb": 0, 00:11:01.281 "state": "online", 00:11:01.281 "raid_level": "raid1", 00:11:01.281 "superblock": true, 00:11:01.281 "num_base_bdevs": 3, 00:11:01.281 "num_base_bdevs_discovered": 3, 00:11:01.281 "num_base_bdevs_operational": 3, 00:11:01.281 "base_bdevs_list": [ 00:11:01.281 { 00:11:01.281 "name": "BaseBdev1", 00:11:01.281 "uuid": "de6460f0-108b-fa54-b614-0a207103d895", 00:11:01.281 "is_configured": true, 00:11:01.281 "data_offset": 2048, 00:11:01.281 "data_size": 63488 00:11:01.281 }, 00:11:01.281 { 00:11:01.281 "name": "BaseBdev2", 00:11:01.281 "uuid": "6bd9fc71-16cb-a65d-b00f-e9523d911826", 00:11:01.281 "is_configured": true, 00:11:01.281 "data_offset": 2048, 00:11:01.281 "data_size": 63488 00:11:01.281 }, 00:11:01.281 { 00:11:01.281 "name": "BaseBdev3", 00:11:01.281 "uuid": "02e83e48-1891-625e-b680-2cc90571017e", 00:11:01.281 "is_configured": true, 00:11:01.281 "data_offset": 2048, 00:11:01.281 "data_size": 63488 00:11:01.281 } 00:11:01.281 ] 00:11:01.281 }' 00:11:01.281 02:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.281 02:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.540 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:01.540 [2024-07-25 02:35:48.425986] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.540 [2024-07-25 02:35:48.426009] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.540 [2024-07-25 02:35:48.426298] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.540 [2024-07-25 02:35:48.426312] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.540 [2024-07-25 02:35:48.426327] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.540 [2024-07-25 02:35:48.426331] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x66513c35400 name raid_bdev1, state offline 00:11:01.540 0 00:11:01.540 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 57813 00:11:01.540 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 57813 ']' 00:11:01.540 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 57813 00:11:01.540 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 57813 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:01.799 killing process with pid 57813 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57813' 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 57813 00:11:01.799 [2024-07-25 02:35:48.457251] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 57813 00:11:01.799 [2024-07-25 02:35:48.471106] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.j5vh74qzGj 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:01.799 00:11:01.799 real 0m5.266s 00:11:01.799 user 0m7.913s 00:11:01.799 sys 0m0.880s 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.799 02:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.799 ************************************ 00:11:01.799 END TEST raid_read_error_test 00:11:01.799 ************************************ 00:11:01.799 02:35:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:01.799 02:35:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:01.799 02:35:48 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:01.799 02:35:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.799 02:35:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.059 ************************************ 00:11:02.059 START TEST raid_write_error_test 00:11:02.059 ************************************ 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qnITWwC6Jr 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=57940 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 57940 /var/tmp/spdk-raid.sock 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 
-z -f -L bdev_raid 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 57940 ']' 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:02.059 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:02.060 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:02.060 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.060 02:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.060 [2024-07-25 02:35:48.735906] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:11:02.060 [2024-07-25 02:35:48.736257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:02.319 EAL: TSC is not safe to use in SMP mode 00:11:02.319 EAL: TSC is not invariant 00:11:02.319 [2024-07-25 02:35:49.151742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.578 [2024-07-25 02:35:49.243987] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:02.578 [2024-07-25 02:35:49.245663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.578 [2024-07-25 02:35:49.246223] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.578 [2024-07-25 02:35:49.246234] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.838 02:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.838 02:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:02.838 02:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:02.838 02:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:03.097 BaseBdev1_malloc 00:11:03.097 02:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:03.097 true 00:11:03.097 02:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:03.357 [2024-07-25 02:35:50.109048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:03.357 [2024-07-25 02:35:50.109108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.357 [2024-07-25 02:35:50.109129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b13e4c34780 00:11:03.357 [2024-07-25 02:35:50.109135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.357 [2024-07-25 02:35:50.109571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.357 [2024-07-25 02:35:50.109600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev1 00:11:03.357 BaseBdev1 00:11:03.357 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:03.357 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:03.616 BaseBdev2_malloc 00:11:03.616 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:03.616 true 00:11:03.616 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:03.875 [2024-07-25 02:35:50.653018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:03.875 [2024-07-25 02:35:50.653054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.875 [2024-07-25 02:35:50.653074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b13e4c34c80 00:11:03.875 [2024-07-25 02:35:50.653080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.875 [2024-07-25 02:35:50.653534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.875 [2024-07-25 02:35:50.653561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:03.875 BaseBdev2 00:11:03.875 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:03.875 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:04.135 BaseBdev3_malloc 00:11:04.135 02:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:04.135 true 00:11:04.135 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.407 [2024-07-25 02:35:51.180994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.407 [2024-07-25 02:35:51.181029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.407 [2024-07-25 02:35:51.181066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b13e4c35180 00:11:04.407 [2024-07-25 02:35:51.181072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.407 [2024-07-25 02:35:51.181511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.407 [2024-07-25 02:35:51.181539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:04.407 BaseBdev3 00:11:04.407 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:04.672 [2024-07-25 02:35:51.364994] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.672 [2024-07-25 02:35:51.365363] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.672 [2024-07-25 02:35:51.365383] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.672 [2024-07-25 02:35:51.365434] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b13e4c35400 00:11:04.672 [2024-07-25 02:35:51.365444] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.672 [2024-07-25 02:35:51.365485] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b13e4ca0e20 00:11:04.672 [2024-07-25 02:35:51.365543] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b13e4c35400 00:11:04.672 [2024-07-25 02:35:51.365551] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3b13e4c35400 00:11:04.672 [2024-07-25 02:35:51.365569] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.672 "name": "raid_bdev1", 00:11:04.672 "uuid": "9bbe0ee9-4a2e-11ef-9c8e-7947904e2597", 00:11:04.672 "strip_size_kb": 0, 00:11:04.672 "state": "online", 00:11:04.672 "raid_level": "raid1", 00:11:04.672 "superblock": true, 00:11:04.672 "num_base_bdevs": 3, 00:11:04.672 "num_base_bdevs_discovered": 3, 00:11:04.672 "num_base_bdevs_operational": 3, 00:11:04.672 "base_bdevs_list": [ 00:11:04.672 { 00:11:04.672 "name": "BaseBdev1", 00:11:04.672 "uuid": "58ac641d-110e-de54-8305-d421e7e5bf67", 00:11:04.672 "is_configured": true, 00:11:04.672 "data_offset": 2048, 00:11:04.672 "data_size": 63488 00:11:04.672 }, 00:11:04.672 { 00:11:04.672 "name": "BaseBdev2", 00:11:04.672 "uuid": "a0054958-f879-4955-b9cd-9e72277d2bdf", 00:11:04.672 "is_configured": true, 00:11:04.672 "data_offset": 2048, 00:11:04.672 "data_size": 63488 00:11:04.672 }, 00:11:04.672 { 00:11:04.672 "name": "BaseBdev3", 00:11:04.672 "uuid": "4e85be17-5f1f-c559-ad0b-0f66a48f84df", 00:11:04.672 "is_configured": true, 00:11:04.672 "data_offset": 2048, 00:11:04.672 
"data_size": 63488 00:11:04.672 } 00:11:04.672 ] 00:11:04.672 }' 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.672 02:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:04.932 02:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:05.192 [2024-07-25 02:35:51.925015] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b13e4ca0ec0 00:11:06.132 02:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:06.392 [2024-07-25 02:35:53.079141] bdev_raid.c:2248:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:06.392 [2024-07-25 02:35:53.079189] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.392 [2024-07-25 02:35:53.079314] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x3b13e4ca0ec0 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.392 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.392 "name": "raid_bdev1", 00:11:06.392 "uuid": "9bbe0ee9-4a2e-11ef-9c8e-7947904e2597", 00:11:06.392 "strip_size_kb": 0, 00:11:06.392 "state": "online", 00:11:06.392 "raid_level": "raid1", 00:11:06.392 "superblock": true, 00:11:06.392 "num_base_bdevs": 3, 00:11:06.392 
"num_base_bdevs_discovered": 2, 00:11:06.392 "num_base_bdevs_operational": 2, 00:11:06.392 "base_bdevs_list": [ 00:11:06.392 { 00:11:06.392 "name": null, 00:11:06.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.392 "is_configured": false, 00:11:06.392 "data_offset": 2048, 00:11:06.392 "data_size": 63488 00:11:06.392 }, 00:11:06.392 { 00:11:06.392 "name": "BaseBdev2", 00:11:06.392 "uuid": "a0054958-f879-4955-b9cd-9e72277d2bdf", 00:11:06.392 "is_configured": true, 00:11:06.392 "data_offset": 2048, 00:11:06.392 "data_size": 63488 00:11:06.392 }, 00:11:06.392 { 00:11:06.392 "name": "BaseBdev3", 00:11:06.392 "uuid": "4e85be17-5f1f-c559-ad0b-0f66a48f84df", 00:11:06.392 "is_configured": true, 00:11:06.393 "data_offset": 2048, 00:11:06.393 "data_size": 63488 00:11:06.393 } 00:11:06.393 ] 00:11:06.393 }' 00:11:06.393 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.393 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:06.919 [2024-07-25 02:35:53.713112] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.919 [2024-07-25 02:35:53.713143] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.919 [2024-07-25 02:35:53.713416] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.919 [2024-07-25 02:35:53.713422] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.919 [2024-07-25 02:35:53.713434] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.919 [2024-07-25 02:35:53.713437] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b13e4c35400 name raid_bdev1, state offline 00:11:06.919 0 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 57940 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 57940 ']' 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 57940 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 57940 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:06.919 killing process with pid 57940 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57940' 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 57940 00:11:06.919 [2024-07-25 02:35:53.743662] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.919 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 57940 00:11:06.919 [2024-07-25 02:35:53.757417] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qnITWwC6Jr 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:07.179 00:11:07.179 real 0m5.220s 00:11:07.179 user 0m7.851s 00:11:07.179 sys 0m0.857s 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.179 02:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.179 ************************************ 00:11:07.179 END TEST raid_write_error_test 00:11:07.179 ************************************ 00:11:07.179 02:35:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:07.179 02:35:53 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:07.179 02:35:53 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:07.179 02:35:53 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:07.179 02:35:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:07.179 02:35:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.179 02:35:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.179 ************************************ 00:11:07.179 START TEST raid_state_function_test 00:11:07.179 ************************************ 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:07.179 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev3 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:07.180 02:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58065 00:11:07.180 Process raid pid: 58065 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58065' 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58065 /var/tmp/spdk-raid.sock 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58065 ']' 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.180 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.180 [2024-07-25 02:35:54.012358] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
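(Illustrative shell, not captured output.) The state-function trace that follows first creates the raid0 bdev while none of its four base bdevs exist, so Existed_Raid has to sit in the "configuring" state until they are added; a minimal sketch of that check, reusing the RPC calls and names from the trace below, with the rpc() wrapper as an assumption:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }  # assumed wrapper, not part of the log
  rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid  # 64 KB strip size
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # expected: "state": "configuring" with "num_base_bdevs_discovered": 0
  rpc bdev_malloc_create 32 512 -b BaseBdev1   # first base bdev appears; the trace shows discovered rising to 1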
00:11:07.180 [2024-07-25 02:35:54.012589] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:07.750 EAL: TSC is not safe to use in SMP mode 00:11:07.750 EAL: TSC is not invariant 00:11:07.750 [2024-07-25 02:35:54.429146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.750 [2024-07-25 02:35:54.521644] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:07.750 [2024-07-25 02:35:54.523310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.750 [2024-07-25 02:35:54.523916] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.750 [2024-07-25 02:35:54.523927] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.010 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.010 02:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:08.010 02:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:08.270 [2024-07-25 02:35:55.070712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.270 [2024-07-25 02:35:55.070750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.270 [2024-07-25 02:35:55.070753] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.270 [2024-07-25 02:35:55.070759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.270 [2024-07-25 02:35:55.070762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:08.270 [2024-07-25 02:35:55.070767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:08.270 [2024-07-25 02:35:55.070769] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:08.270 [2024-07-25 02:35:55.070791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.270 02:35:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.270 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.530 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.530 "name": "Existed_Raid", 00:11:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.530 "strip_size_kb": 64, 00:11:08.530 "state": "configuring", 00:11:08.530 "raid_level": "raid0", 00:11:08.530 "superblock": false, 00:11:08.530 "num_base_bdevs": 4, 00:11:08.530 "num_base_bdevs_discovered": 0, 00:11:08.530 "num_base_bdevs_operational": 4, 00:11:08.530 "base_bdevs_list": [ 00:11:08.530 { 00:11:08.530 "name": "BaseBdev1", 00:11:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.530 "is_configured": false, 00:11:08.530 "data_offset": 0, 00:11:08.530 "data_size": 0 00:11:08.530 }, 00:11:08.530 { 00:11:08.530 "name": "BaseBdev2", 00:11:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.530 "is_configured": false, 00:11:08.530 "data_offset": 0, 00:11:08.530 "data_size": 0 00:11:08.530 }, 00:11:08.530 { 00:11:08.530 "name": "BaseBdev3", 00:11:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.530 "is_configured": false, 00:11:08.530 "data_offset": 0, 00:11:08.530 "data_size": 0 00:11:08.530 }, 00:11:08.530 { 00:11:08.530 "name": "BaseBdev4", 00:11:08.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.530 "is_configured": false, 00:11:08.530 "data_offset": 0, 00:11:08.530 "data_size": 0 00:11:08.530 } 00:11:08.530 ] 00:11:08.530 }' 00:11:08.530 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.530 02:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.790 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:09.050 [2024-07-25 02:35:55.702672] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.050 [2024-07-25 02:35:55.702688] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x311af3434500 name Existed_Raid, state configuring 00:11:09.050 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:09.050 [2024-07-25 02:35:55.886666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.050 [2024-07-25 02:35:55.886693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.050 [2024-07-25 02:35:55.886696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.050 [2024-07-25 02:35:55.886701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.050 [2024-07-25 02:35:55.886703] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.050 [2024-07-25 02:35:55.886708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.050 [2024-07-25 02:35:55.886711] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:09.050 [2024-07-25 02:35:55.886715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:09.050 02:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.310 [2024-07-25 02:35:56.071434] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.310 BaseBdev1 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:09.310 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.570 [ 00:11:09.570 { 00:11:09.570 "name": "BaseBdev1", 00:11:09.570 "aliases": [ 00:11:09.570 "9e8c16aa-4a2e-11ef-9c8e-7947904e2597" 00:11:09.570 ], 00:11:09.570 "product_name": "Malloc disk", 00:11:09.570 "block_size": 512, 00:11:09.570 "num_blocks": 65536, 00:11:09.570 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:09.570 "assigned_rate_limits": { 00:11:09.570 "rw_ios_per_sec": 0, 00:11:09.570 "rw_mbytes_per_sec": 0, 00:11:09.570 "r_mbytes_per_sec": 0, 00:11:09.570 "w_mbytes_per_sec": 0 00:11:09.570 }, 00:11:09.570 "claimed": true, 00:11:09.570 "claim_type": "exclusive_write", 00:11:09.570 "zoned": false, 00:11:09.570 "supported_io_types": { 00:11:09.570 "read": true, 00:11:09.570 "write": true, 00:11:09.570 "unmap": true, 00:11:09.570 "flush": true, 00:11:09.570 "reset": true, 00:11:09.570 "nvme_admin": false, 00:11:09.570 "nvme_io": false, 00:11:09.570 "nvme_io_md": false, 00:11:09.570 "write_zeroes": true, 00:11:09.570 "zcopy": true, 00:11:09.570 "get_zone_info": false, 00:11:09.570 "zone_management": false, 00:11:09.570 "zone_append": false, 00:11:09.570 "compare": false, 00:11:09.570 "compare_and_write": false, 00:11:09.570 "abort": true, 00:11:09.570 "seek_hole": false, 00:11:09.570 "seek_data": false, 00:11:09.570 "copy": true, 00:11:09.570 "nvme_iov_md": false 00:11:09.570 }, 00:11:09.570 "memory_domains": [ 00:11:09.570 { 00:11:09.570 "dma_device_id": "system", 00:11:09.570 "dma_device_type": 1 00:11:09.570 }, 00:11:09.570 { 00:11:09.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.570 "dma_device_type": 2 00:11:09.570 } 00:11:09.570 ], 00:11:09.570 "driver_specific": {} 00:11:09.570 } 00:11:09.570 ] 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.570 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.830 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.830 "name": "Existed_Raid", 00:11:09.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.830 "strip_size_kb": 64, 00:11:09.830 "state": "configuring", 00:11:09.830 "raid_level": "raid0", 00:11:09.830 "superblock": false, 00:11:09.830 "num_base_bdevs": 4, 00:11:09.830 "num_base_bdevs_discovered": 1, 00:11:09.830 "num_base_bdevs_operational": 4, 00:11:09.830 "base_bdevs_list": [ 00:11:09.830 { 00:11:09.830 "name": "BaseBdev1", 00:11:09.830 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:09.830 "is_configured": true, 00:11:09.830 "data_offset": 0, 00:11:09.830 "data_size": 65536 00:11:09.830 }, 00:11:09.830 { 00:11:09.830 "name": "BaseBdev2", 00:11:09.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.830 "is_configured": false, 00:11:09.830 "data_offset": 0, 00:11:09.830 "data_size": 0 00:11:09.830 }, 00:11:09.830 { 00:11:09.830 "name": "BaseBdev3", 00:11:09.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.830 "is_configured": false, 00:11:09.830 "data_offset": 0, 00:11:09.830 "data_size": 0 00:11:09.830 }, 00:11:09.830 { 00:11:09.830 "name": "BaseBdev4", 00:11:09.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.830 "is_configured": false, 00:11:09.830 "data_offset": 0, 00:11:09.830 "data_size": 0 00:11:09.830 } 00:11:09.830 ] 00:11:09.830 }' 00:11:09.830 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.830 02:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.090 02:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:10.350 [2024-07-25 02:35:57.066625] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.350 [2024-07-25 02:35:57.066641] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x311af3434500 name Existed_Raid, state configuring 00:11:10.350 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:11:10.350 [2024-07-25 02:35:57.246628] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.350 [2024-07-25 02:35:57.247232] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.350 [2024-07-25 02:35:57.247268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.350 [2024-07-25 02:35:57.247272] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.350 [2024-07-25 02:35:57.247277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.350 [2024-07-25 02:35:57.247281] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.350 [2024-07-25 02:35:57.247286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.610 "name": "Existed_Raid", 00:11:10.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.610 "strip_size_kb": 64, 00:11:10.610 "state": "configuring", 00:11:10.610 "raid_level": "raid0", 00:11:10.610 "superblock": false, 00:11:10.610 "num_base_bdevs": 4, 00:11:10.610 "num_base_bdevs_discovered": 1, 00:11:10.610 "num_base_bdevs_operational": 4, 00:11:10.610 "base_bdevs_list": [ 00:11:10.610 { 00:11:10.610 "name": "BaseBdev1", 00:11:10.610 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:10.610 "is_configured": true, 00:11:10.610 "data_offset": 0, 00:11:10.610 "data_size": 65536 00:11:10.610 }, 00:11:10.610 { 00:11:10.610 "name": "BaseBdev2", 00:11:10.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.610 "is_configured": false, 00:11:10.610 "data_offset": 0, 00:11:10.610 "data_size": 
0 00:11:10.610 }, 00:11:10.610 { 00:11:10.610 "name": "BaseBdev3", 00:11:10.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.610 "is_configured": false, 00:11:10.610 "data_offset": 0, 00:11:10.610 "data_size": 0 00:11:10.610 }, 00:11:10.610 { 00:11:10.610 "name": "BaseBdev4", 00:11:10.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.610 "is_configured": false, 00:11:10.610 "data_offset": 0, 00:11:10.610 "data_size": 0 00:11:10.610 } 00:11:10.610 ] 00:11:10.610 }' 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.610 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.870 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:11.130 [2024-07-25 02:35:57.890701] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:11.130 BaseBdev2 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:11.130 02:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:11.389 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:11.389 [ 00:11:11.389 { 00:11:11.389 "name": "BaseBdev2", 00:11:11.389 "aliases": [ 00:11:11.389 "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597" 00:11:11.389 ], 00:11:11.389 "product_name": "Malloc disk", 00:11:11.389 "block_size": 512, 00:11:11.389 "num_blocks": 65536, 00:11:11.389 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:11.389 "assigned_rate_limits": { 00:11:11.389 "rw_ios_per_sec": 0, 00:11:11.389 "rw_mbytes_per_sec": 0, 00:11:11.389 "r_mbytes_per_sec": 0, 00:11:11.389 "w_mbytes_per_sec": 0 00:11:11.389 }, 00:11:11.389 "claimed": true, 00:11:11.389 "claim_type": "exclusive_write", 00:11:11.389 "zoned": false, 00:11:11.389 "supported_io_types": { 00:11:11.389 "read": true, 00:11:11.389 "write": true, 00:11:11.389 "unmap": true, 00:11:11.389 "flush": true, 00:11:11.389 "reset": true, 00:11:11.389 "nvme_admin": false, 00:11:11.389 "nvme_io": false, 00:11:11.389 "nvme_io_md": false, 00:11:11.389 "write_zeroes": true, 00:11:11.389 "zcopy": true, 00:11:11.389 "get_zone_info": false, 00:11:11.389 "zone_management": false, 00:11:11.389 "zone_append": false, 00:11:11.389 "compare": false, 00:11:11.389 "compare_and_write": false, 00:11:11.389 "abort": true, 00:11:11.389 "seek_hole": false, 00:11:11.389 "seek_data": false, 00:11:11.389 "copy": true, 00:11:11.389 "nvme_iov_md": false 00:11:11.389 }, 00:11:11.389 "memory_domains": [ 00:11:11.389 { 00:11:11.389 "dma_device_id": "system", 00:11:11.389 "dma_device_type": 1 
00:11:11.389 }, 00:11:11.389 { 00:11:11.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.389 "dma_device_type": 2 00:11:11.389 } 00:11:11.389 ], 00:11:11.389 "driver_specific": {} 00:11:11.389 } 00:11:11.390 ] 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.390 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.649 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.649 "name": "Existed_Raid", 00:11:11.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.649 "strip_size_kb": 64, 00:11:11.649 "state": "configuring", 00:11:11.649 "raid_level": "raid0", 00:11:11.649 "superblock": false, 00:11:11.649 "num_base_bdevs": 4, 00:11:11.649 "num_base_bdevs_discovered": 2, 00:11:11.649 "num_base_bdevs_operational": 4, 00:11:11.649 "base_bdevs_list": [ 00:11:11.649 { 00:11:11.649 "name": "BaseBdev1", 00:11:11.649 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:11.649 "is_configured": true, 00:11:11.649 "data_offset": 0, 00:11:11.649 "data_size": 65536 00:11:11.649 }, 00:11:11.649 { 00:11:11.649 "name": "BaseBdev2", 00:11:11.649 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:11.649 "is_configured": true, 00:11:11.649 "data_offset": 0, 00:11:11.649 "data_size": 65536 00:11:11.649 }, 00:11:11.649 { 00:11:11.649 "name": "BaseBdev3", 00:11:11.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.649 "is_configured": false, 00:11:11.649 "data_offset": 0, 00:11:11.649 "data_size": 0 00:11:11.649 }, 00:11:11.649 { 00:11:11.649 "name": "BaseBdev4", 00:11:11.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.649 "is_configured": false, 00:11:11.649 "data_offset": 0, 00:11:11.649 "data_size": 0 00:11:11.649 } 00:11:11.649 ] 00:11:11.649 }' 00:11:11.649 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.649 02:35:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.909 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.168 [2024-07-25 02:35:58.882658] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.168 BaseBdev3 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:12.168 02:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:12.168 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.427 [ 00:11:12.427 { 00:11:12.427 "name": "BaseBdev3", 00:11:12.427 "aliases": [ 00:11:12.427 "a039274a-4a2e-11ef-9c8e-7947904e2597" 00:11:12.427 ], 00:11:12.427 "product_name": "Malloc disk", 00:11:12.427 "block_size": 512, 00:11:12.427 "num_blocks": 65536, 00:11:12.427 "uuid": "a039274a-4a2e-11ef-9c8e-7947904e2597", 00:11:12.427 "assigned_rate_limits": { 00:11:12.427 "rw_ios_per_sec": 0, 00:11:12.427 "rw_mbytes_per_sec": 0, 00:11:12.427 "r_mbytes_per_sec": 0, 00:11:12.427 "w_mbytes_per_sec": 0 00:11:12.427 }, 00:11:12.427 "claimed": true, 00:11:12.427 "claim_type": "exclusive_write", 00:11:12.427 "zoned": false, 00:11:12.427 "supported_io_types": { 00:11:12.427 "read": true, 00:11:12.427 "write": true, 00:11:12.427 "unmap": true, 00:11:12.427 "flush": true, 00:11:12.427 "reset": true, 00:11:12.427 "nvme_admin": false, 00:11:12.427 "nvme_io": false, 00:11:12.427 "nvme_io_md": false, 00:11:12.427 "write_zeroes": true, 00:11:12.427 "zcopy": true, 00:11:12.427 "get_zone_info": false, 00:11:12.427 "zone_management": false, 00:11:12.427 "zone_append": false, 00:11:12.427 "compare": false, 00:11:12.427 "compare_and_write": false, 00:11:12.427 "abort": true, 00:11:12.427 "seek_hole": false, 00:11:12.427 "seek_data": false, 00:11:12.427 "copy": true, 00:11:12.427 "nvme_iov_md": false 00:11:12.427 }, 00:11:12.427 "memory_domains": [ 00:11:12.427 { 00:11:12.427 "dma_device_id": "system", 00:11:12.427 "dma_device_type": 1 00:11:12.427 }, 00:11:12.427 { 00:11:12.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.427 "dma_device_type": 2 00:11:12.427 } 00:11:12.427 ], 00:11:12.427 "driver_specific": {} 00:11:12.427 } 00:11:12.427 ] 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.427 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.687 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.687 "name": "Existed_Raid", 00:11:12.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.687 "strip_size_kb": 64, 00:11:12.687 "state": "configuring", 00:11:12.687 "raid_level": "raid0", 00:11:12.687 "superblock": false, 00:11:12.687 "num_base_bdevs": 4, 00:11:12.687 "num_base_bdevs_discovered": 3, 00:11:12.687 "num_base_bdevs_operational": 4, 00:11:12.687 "base_bdevs_list": [ 00:11:12.687 { 00:11:12.687 "name": "BaseBdev1", 00:11:12.687 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:12.687 "is_configured": true, 00:11:12.687 "data_offset": 0, 00:11:12.687 "data_size": 65536 00:11:12.687 }, 00:11:12.687 { 00:11:12.687 "name": "BaseBdev2", 00:11:12.687 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:12.687 "is_configured": true, 00:11:12.687 "data_offset": 0, 00:11:12.687 "data_size": 65536 00:11:12.687 }, 00:11:12.687 { 00:11:12.687 "name": "BaseBdev3", 00:11:12.687 "uuid": "a039274a-4a2e-11ef-9c8e-7947904e2597", 00:11:12.687 "is_configured": true, 00:11:12.687 "data_offset": 0, 00:11:12.687 "data_size": 65536 00:11:12.687 }, 00:11:12.687 { 00:11:12.687 "name": "BaseBdev4", 00:11:12.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.687 "is_configured": false, 00:11:12.687 "data_offset": 0, 00:11:12.687 "data_size": 0 00:11:12.687 } 00:11:12.687 ] 00:11:12.687 }' 00:11:12.687 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.687 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.946 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:12.946 [2024-07-25 02:35:59.850637] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.946 [2024-07-25 02:35:59.850655] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x311af3434a00 00:11:12.946 [2024-07-25 02:35:59.850659] bdev_raid.c:1722:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:11:12.946 [2024-07-25 02:35:59.850681] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x311af3497e20 00:11:12.946 [2024-07-25 02:35:59.850751] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x311af3434a00 00:11:12.946 [2024-07-25 02:35:59.850754] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x311af3434a00 00:11:12.946 [2024-07-25 02:35:59.850778] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.206 BaseBdev4 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:13.206 02:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:13.206 02:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.465 [ 00:11:13.465 { 00:11:13.465 "name": "BaseBdev4", 00:11:13.465 "aliases": [ 00:11:13.465 "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597" 00:11:13.465 ], 00:11:13.465 "product_name": "Malloc disk", 00:11:13.465 "block_size": 512, 00:11:13.465 "num_blocks": 65536, 00:11:13.465 "uuid": "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597", 00:11:13.465 "assigned_rate_limits": { 00:11:13.465 "rw_ios_per_sec": 0, 00:11:13.465 "rw_mbytes_per_sec": 0, 00:11:13.465 "r_mbytes_per_sec": 0, 00:11:13.465 "w_mbytes_per_sec": 0 00:11:13.465 }, 00:11:13.465 "claimed": true, 00:11:13.465 "claim_type": "exclusive_write", 00:11:13.465 "zoned": false, 00:11:13.465 "supported_io_types": { 00:11:13.465 "read": true, 00:11:13.465 "write": true, 00:11:13.465 "unmap": true, 00:11:13.465 "flush": true, 00:11:13.465 "reset": true, 00:11:13.465 "nvme_admin": false, 00:11:13.465 "nvme_io": false, 00:11:13.465 "nvme_io_md": false, 00:11:13.465 "write_zeroes": true, 00:11:13.465 "zcopy": true, 00:11:13.465 "get_zone_info": false, 00:11:13.465 "zone_management": false, 00:11:13.465 "zone_append": false, 00:11:13.465 "compare": false, 00:11:13.465 "compare_and_write": false, 00:11:13.465 "abort": true, 00:11:13.465 "seek_hole": false, 00:11:13.465 "seek_data": false, 00:11:13.465 "copy": true, 00:11:13.465 "nvme_iov_md": false 00:11:13.465 }, 00:11:13.465 "memory_domains": [ 00:11:13.465 { 00:11:13.465 "dma_device_id": "system", 00:11:13.465 "dma_device_type": 1 00:11:13.465 }, 00:11:13.465 { 00:11:13.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.465 "dma_device_type": 2 00:11:13.465 } 00:11:13.465 ], 00:11:13.465 "driver_specific": {} 00:11:13.465 } 00:11:13.465 ] 00:11:13.465 02:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:13.465 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:13.465 02:36:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:13.465 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:13.465 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:13.465 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.466 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.725 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.725 "name": "Existed_Raid", 00:11:13.725 "uuid": "a0ccdeb6-4a2e-11ef-9c8e-7947904e2597", 00:11:13.725 "strip_size_kb": 64, 00:11:13.725 "state": "online", 00:11:13.725 "raid_level": "raid0", 00:11:13.725 "superblock": false, 00:11:13.725 "num_base_bdevs": 4, 00:11:13.725 "num_base_bdevs_discovered": 4, 00:11:13.725 "num_base_bdevs_operational": 4, 00:11:13.725 "base_bdevs_list": [ 00:11:13.725 { 00:11:13.725 "name": "BaseBdev1", 00:11:13.725 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:13.725 "is_configured": true, 00:11:13.725 "data_offset": 0, 00:11:13.725 "data_size": 65536 00:11:13.725 }, 00:11:13.725 { 00:11:13.725 "name": "BaseBdev2", 00:11:13.725 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:13.725 "is_configured": true, 00:11:13.725 "data_offset": 0, 00:11:13.725 "data_size": 65536 00:11:13.725 }, 00:11:13.725 { 00:11:13.725 "name": "BaseBdev3", 00:11:13.725 "uuid": "a039274a-4a2e-11ef-9c8e-7947904e2597", 00:11:13.726 "is_configured": true, 00:11:13.726 "data_offset": 0, 00:11:13.726 "data_size": 65536 00:11:13.726 }, 00:11:13.726 { 00:11:13.726 "name": "BaseBdev4", 00:11:13.726 "uuid": "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597", 00:11:13.726 "is_configured": true, 00:11:13.726 "data_offset": 0, 00:11:13.726 "data_size": 65536 00:11:13.726 } 00:11:13.726 ] 00:11:13.726 }' 00:11:13.726 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.726 02:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.985 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.985 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:13.985 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:13.985 
02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:13.985 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:13.985 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:13.986 [2024-07-25 02:36:00.858587] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:13.986 "name": "Existed_Raid", 00:11:13.986 "aliases": [ 00:11:13.986 "a0ccdeb6-4a2e-11ef-9c8e-7947904e2597" 00:11:13.986 ], 00:11:13.986 "product_name": "Raid Volume", 00:11:13.986 "block_size": 512, 00:11:13.986 "num_blocks": 262144, 00:11:13.986 "uuid": "a0ccdeb6-4a2e-11ef-9c8e-7947904e2597", 00:11:13.986 "assigned_rate_limits": { 00:11:13.986 "rw_ios_per_sec": 0, 00:11:13.986 "rw_mbytes_per_sec": 0, 00:11:13.986 "r_mbytes_per_sec": 0, 00:11:13.986 "w_mbytes_per_sec": 0 00:11:13.986 }, 00:11:13.986 "claimed": false, 00:11:13.986 "zoned": false, 00:11:13.986 "supported_io_types": { 00:11:13.986 "read": true, 00:11:13.986 "write": true, 00:11:13.986 "unmap": true, 00:11:13.986 "flush": true, 00:11:13.986 "reset": true, 00:11:13.986 "nvme_admin": false, 00:11:13.986 "nvme_io": false, 00:11:13.986 "nvme_io_md": false, 00:11:13.986 "write_zeroes": true, 00:11:13.986 "zcopy": false, 00:11:13.986 "get_zone_info": false, 00:11:13.986 "zone_management": false, 00:11:13.986 "zone_append": false, 00:11:13.986 "compare": false, 00:11:13.986 "compare_and_write": false, 00:11:13.986 "abort": false, 00:11:13.986 "seek_hole": false, 00:11:13.986 "seek_data": false, 00:11:13.986 "copy": false, 00:11:13.986 "nvme_iov_md": false 00:11:13.986 }, 00:11:13.986 "memory_domains": [ 00:11:13.986 { 00:11:13.986 "dma_device_id": "system", 00:11:13.986 "dma_device_type": 1 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.986 "dma_device_type": 2 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "system", 00:11:13.986 "dma_device_type": 1 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.986 "dma_device_type": 2 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "system", 00:11:13.986 "dma_device_type": 1 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.986 "dma_device_type": 2 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "system", 00:11:13.986 "dma_device_type": 1 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.986 "dma_device_type": 2 00:11:13.986 } 00:11:13.986 ], 00:11:13.986 "driver_specific": { 00:11:13.986 "raid": { 00:11:13.986 "uuid": "a0ccdeb6-4a2e-11ef-9c8e-7947904e2597", 00:11:13.986 "strip_size_kb": 64, 00:11:13.986 "state": "online", 00:11:13.986 "raid_level": "raid0", 00:11:13.986 "superblock": false, 00:11:13.986 "num_base_bdevs": 4, 00:11:13.986 "num_base_bdevs_discovered": 4, 00:11:13.986 "num_base_bdevs_operational": 4, 00:11:13.986 "base_bdevs_list": [ 00:11:13.986 { 00:11:13.986 "name": "BaseBdev1", 00:11:13.986 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:13.986 
"is_configured": true, 00:11:13.986 "data_offset": 0, 00:11:13.986 "data_size": 65536 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "name": "BaseBdev2", 00:11:13.986 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:13.986 "is_configured": true, 00:11:13.986 "data_offset": 0, 00:11:13.986 "data_size": 65536 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "name": "BaseBdev3", 00:11:13.986 "uuid": "a039274a-4a2e-11ef-9c8e-7947904e2597", 00:11:13.986 "is_configured": true, 00:11:13.986 "data_offset": 0, 00:11:13.986 "data_size": 65536 00:11:13.986 }, 00:11:13.986 { 00:11:13.986 "name": "BaseBdev4", 00:11:13.986 "uuid": "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597", 00:11:13.986 "is_configured": true, 00:11:13.986 "data_offset": 0, 00:11:13.986 "data_size": 65536 00:11:13.986 } 00:11:13.986 ] 00:11:13.986 } 00:11:13.986 } 00:11:13.986 }' 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:13.986 BaseBdev2 00:11:13.986 BaseBdev3 00:11:13.986 BaseBdev4' 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.986 02:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.245 "name": "BaseBdev1", 00:11:14.245 "aliases": [ 00:11:14.245 "9e8c16aa-4a2e-11ef-9c8e-7947904e2597" 00:11:14.245 ], 00:11:14.245 "product_name": "Malloc disk", 00:11:14.245 "block_size": 512, 00:11:14.245 "num_blocks": 65536, 00:11:14.245 "uuid": "9e8c16aa-4a2e-11ef-9c8e-7947904e2597", 00:11:14.245 "assigned_rate_limits": { 00:11:14.245 "rw_ios_per_sec": 0, 00:11:14.245 "rw_mbytes_per_sec": 0, 00:11:14.245 "r_mbytes_per_sec": 0, 00:11:14.245 "w_mbytes_per_sec": 0 00:11:14.245 }, 00:11:14.245 "claimed": true, 00:11:14.245 "claim_type": "exclusive_write", 00:11:14.245 "zoned": false, 00:11:14.245 "supported_io_types": { 00:11:14.245 "read": true, 00:11:14.245 "write": true, 00:11:14.245 "unmap": true, 00:11:14.245 "flush": true, 00:11:14.245 "reset": true, 00:11:14.245 "nvme_admin": false, 00:11:14.245 "nvme_io": false, 00:11:14.245 "nvme_io_md": false, 00:11:14.245 "write_zeroes": true, 00:11:14.245 "zcopy": true, 00:11:14.245 "get_zone_info": false, 00:11:14.245 "zone_management": false, 00:11:14.245 "zone_append": false, 00:11:14.245 "compare": false, 00:11:14.245 "compare_and_write": false, 00:11:14.245 "abort": true, 00:11:14.245 "seek_hole": false, 00:11:14.245 "seek_data": false, 00:11:14.245 "copy": true, 00:11:14.245 "nvme_iov_md": false 00:11:14.245 }, 00:11:14.245 "memory_domains": [ 00:11:14.245 { 00:11:14.245 "dma_device_id": "system", 00:11:14.245 "dma_device_type": 1 00:11:14.245 }, 00:11:14.245 { 00:11:14.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.245 "dma_device_type": 2 00:11:14.245 } 00:11:14.245 ], 00:11:14.245 "driver_specific": {} 00:11:14.245 }' 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.245 02:36:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.245 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.246 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.246 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.505 "name": "BaseBdev2", 00:11:14.505 "aliases": [ 00:11:14.505 "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597" 00:11:14.505 ], 00:11:14.505 "product_name": "Malloc disk", 00:11:14.505 "block_size": 512, 00:11:14.505 "num_blocks": 65536, 00:11:14.505 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:14.505 "assigned_rate_limits": { 00:11:14.505 "rw_ios_per_sec": 0, 00:11:14.505 "rw_mbytes_per_sec": 0, 00:11:14.505 "r_mbytes_per_sec": 0, 00:11:14.505 "w_mbytes_per_sec": 0 00:11:14.505 }, 00:11:14.505 "claimed": true, 00:11:14.505 "claim_type": "exclusive_write", 00:11:14.505 "zoned": false, 00:11:14.505 "supported_io_types": { 00:11:14.505 "read": true, 00:11:14.505 "write": true, 00:11:14.505 "unmap": true, 00:11:14.505 "flush": true, 00:11:14.505 "reset": true, 00:11:14.505 "nvme_admin": false, 00:11:14.505 "nvme_io": false, 00:11:14.505 "nvme_io_md": false, 00:11:14.505 "write_zeroes": true, 00:11:14.505 "zcopy": true, 00:11:14.505 "get_zone_info": false, 00:11:14.505 "zone_management": false, 00:11:14.505 "zone_append": false, 00:11:14.505 "compare": false, 00:11:14.505 "compare_and_write": false, 00:11:14.505 "abort": true, 00:11:14.505 "seek_hole": false, 00:11:14.505 "seek_data": false, 00:11:14.505 "copy": true, 00:11:14.505 "nvme_iov_md": false 00:11:14.505 }, 00:11:14.505 "memory_domains": [ 00:11:14.505 { 00:11:14.505 "dma_device_id": "system", 00:11:14.505 "dma_device_type": 1 00:11:14.505 }, 00:11:14.505 { 00:11:14.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.505 "dma_device_type": 2 00:11:14.505 } 00:11:14.505 ], 00:11:14.505 "driver_specific": {} 00:11:14.505 }' 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.505 
02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.505 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.764 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.764 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.764 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.764 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:14.764 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.764 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.764 "name": "BaseBdev3", 00:11:14.764 "aliases": [ 00:11:14.764 "a039274a-4a2e-11ef-9c8e-7947904e2597" 00:11:14.764 ], 00:11:14.764 "product_name": "Malloc disk", 00:11:14.764 "block_size": 512, 00:11:14.764 "num_blocks": 65536, 00:11:14.764 "uuid": "a039274a-4a2e-11ef-9c8e-7947904e2597", 00:11:14.764 "assigned_rate_limits": { 00:11:14.764 "rw_ios_per_sec": 0, 00:11:14.764 "rw_mbytes_per_sec": 0, 00:11:14.764 "r_mbytes_per_sec": 0, 00:11:14.764 "w_mbytes_per_sec": 0 00:11:14.764 }, 00:11:14.764 "claimed": true, 00:11:14.764 "claim_type": "exclusive_write", 00:11:14.764 "zoned": false, 00:11:14.764 "supported_io_types": { 00:11:14.764 "read": true, 00:11:14.764 "write": true, 00:11:14.764 "unmap": true, 00:11:14.764 "flush": true, 00:11:14.764 "reset": true, 00:11:14.764 "nvme_admin": false, 00:11:14.764 "nvme_io": false, 00:11:14.764 "nvme_io_md": false, 00:11:14.764 "write_zeroes": true, 00:11:14.764 "zcopy": true, 00:11:14.764 "get_zone_info": false, 00:11:14.764 "zone_management": false, 00:11:14.764 "zone_append": false, 00:11:14.764 "compare": false, 00:11:14.764 "compare_and_write": false, 00:11:14.764 "abort": true, 00:11:14.764 "seek_hole": false, 00:11:14.764 "seek_data": false, 00:11:14.764 "copy": true, 00:11:14.764 "nvme_iov_md": false 00:11:14.764 }, 00:11:14.764 "memory_domains": [ 00:11:14.764 { 00:11:14.764 "dma_device_id": "system", 00:11:14.764 "dma_device_type": 1 00:11:14.764 }, 00:11:14.764 { 00:11:14.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.765 "dma_device_type": 2 00:11:14.765 } 00:11:14.765 ], 00:11:14.765 "driver_specific": {} 00:11:14.765 }' 00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:11:14.765 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:15.024 "name": "BaseBdev4", 00:11:15.024 "aliases": [ 00:11:15.024 "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597" 00:11:15.024 ], 00:11:15.024 "product_name": "Malloc disk", 00:11:15.024 "block_size": 512, 00:11:15.024 "num_blocks": 65536, 00:11:15.024 "uuid": "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597", 00:11:15.024 "assigned_rate_limits": { 00:11:15.024 "rw_ios_per_sec": 0, 00:11:15.024 "rw_mbytes_per_sec": 0, 00:11:15.024 "r_mbytes_per_sec": 0, 00:11:15.024 "w_mbytes_per_sec": 0 00:11:15.024 }, 00:11:15.024 "claimed": true, 00:11:15.024 "claim_type": "exclusive_write", 00:11:15.024 "zoned": false, 00:11:15.024 "supported_io_types": { 00:11:15.024 "read": true, 00:11:15.024 "write": true, 00:11:15.024 "unmap": true, 00:11:15.024 "flush": true, 00:11:15.024 "reset": true, 00:11:15.024 "nvme_admin": false, 00:11:15.024 "nvme_io": false, 00:11:15.024 "nvme_io_md": false, 00:11:15.024 "write_zeroes": true, 00:11:15.024 "zcopy": true, 00:11:15.024 "get_zone_info": false, 00:11:15.024 "zone_management": false, 00:11:15.024 "zone_append": false, 00:11:15.024 "compare": false, 00:11:15.024 "compare_and_write": false, 00:11:15.024 "abort": true, 00:11:15.024 "seek_hole": false, 00:11:15.024 "seek_data": false, 00:11:15.024 "copy": true, 00:11:15.024 "nvme_iov_md": false 00:11:15.024 }, 00:11:15.024 "memory_domains": [ 00:11:15.024 { 00:11:15.024 "dma_device_id": "system", 00:11:15.024 "dma_device_type": 1 00:11:15.024 }, 00:11:15.024 { 00:11:15.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.024 "dma_device_type": 2 00:11:15.024 } 00:11:15.024 ], 00:11:15.024 "driver_specific": {} 00:11:15.024 }' 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:15.024 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:15.284 02:36:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:15.284 [2024-07-25 02:36:02.146549] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.284 [2024-07-25 02:36:02.146563] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.284 [2024-07-25 02:36:02.146572] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.284 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.543 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.543 "name": "Existed_Raid", 00:11:15.543 "uuid": "a0ccdeb6-4a2e-11ef-9c8e-7947904e2597", 00:11:15.543 "strip_size_kb": 64, 00:11:15.543 "state": "offline", 00:11:15.543 "raid_level": "raid0", 00:11:15.543 "superblock": false, 00:11:15.543 "num_base_bdevs": 4, 00:11:15.543 "num_base_bdevs_discovered": 3, 00:11:15.543 "num_base_bdevs_operational": 3, 00:11:15.543 "base_bdevs_list": [ 00:11:15.543 { 00:11:15.543 "name": null, 00:11:15.543 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:15.543 "is_configured": false, 00:11:15.543 "data_offset": 0, 00:11:15.543 "data_size": 65536 00:11:15.543 }, 00:11:15.543 { 00:11:15.543 "name": "BaseBdev2", 00:11:15.543 "uuid": "9fa1c9f8-4a2e-11ef-9c8e-7947904e2597", 00:11:15.543 "is_configured": true, 00:11:15.543 "data_offset": 0, 00:11:15.543 "data_size": 65536 00:11:15.543 }, 00:11:15.543 { 00:11:15.543 "name": "BaseBdev3", 00:11:15.543 "uuid": "a039274a-4a2e-11ef-9c8e-7947904e2597", 00:11:15.543 "is_configured": true, 00:11:15.543 "data_offset": 0, 00:11:15.543 "data_size": 65536 00:11:15.543 }, 00:11:15.543 { 00:11:15.543 "name": "BaseBdev4", 00:11:15.543 "uuid": "a0ccdb0a-4a2e-11ef-9c8e-7947904e2597", 00:11:15.543 "is_configured": true, 00:11:15.543 "data_offset": 0, 00:11:15.543 "data_size": 65536 00:11:15.543 } 00:11:15.543 ] 00:11:15.543 }' 00:11:15.543 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.543 02:36:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.802 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:15.802 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:15.802 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.802 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:16.061 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:16.061 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.061 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:16.320 [2024-07-25 02:36:02.975191] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.320 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:16.320 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:16.320 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:16.320 02:36:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.320 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:16.320 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.320 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:16.578 [2024-07-25 02:36:03.339863] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.578 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:16.578 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:16.578 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.578 02:36:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:11:16.836 [2024-07-25 02:36:03.688536] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:16.836 [2024-07-25 02:36:03.688552] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x311af3434a00 name Existed_Raid, state offline 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.836 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.095 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:17.095 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:17.095 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:11:17.095 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:17.095 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:17.095 02:36:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.354 BaseBdev2 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.354 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.613 [ 00:11:17.613 { 00:11:17.613 "name": "BaseBdev2", 00:11:17.613 "aliases": [ 00:11:17.613 "a34ebe31-4a2e-11ef-9c8e-7947904e2597" 00:11:17.613 ], 00:11:17.613 "product_name": "Malloc disk", 00:11:17.613 "block_size": 512, 00:11:17.613 "num_blocks": 65536, 00:11:17.613 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:17.613 "assigned_rate_limits": { 00:11:17.613 "rw_ios_per_sec": 0, 00:11:17.613 "rw_mbytes_per_sec": 0, 00:11:17.613 "r_mbytes_per_sec": 0, 00:11:17.613 "w_mbytes_per_sec": 0 
00:11:17.613 }, 00:11:17.613 "claimed": false, 00:11:17.613 "zoned": false, 00:11:17.613 "supported_io_types": { 00:11:17.613 "read": true, 00:11:17.613 "write": true, 00:11:17.613 "unmap": true, 00:11:17.613 "flush": true, 00:11:17.613 "reset": true, 00:11:17.613 "nvme_admin": false, 00:11:17.613 "nvme_io": false, 00:11:17.613 "nvme_io_md": false, 00:11:17.613 "write_zeroes": true, 00:11:17.613 "zcopy": true, 00:11:17.613 "get_zone_info": false, 00:11:17.613 "zone_management": false, 00:11:17.613 "zone_append": false, 00:11:17.613 "compare": false, 00:11:17.613 "compare_and_write": false, 00:11:17.613 "abort": true, 00:11:17.613 "seek_hole": false, 00:11:17.613 "seek_data": false, 00:11:17.613 "copy": true, 00:11:17.613 "nvme_iov_md": false 00:11:17.613 }, 00:11:17.613 "memory_domains": [ 00:11:17.613 { 00:11:17.613 "dma_device_id": "system", 00:11:17.613 "dma_device_type": 1 00:11:17.613 }, 00:11:17.613 { 00:11:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.613 "dma_device_type": 2 00:11:17.613 } 00:11:17.613 ], 00:11:17.613 "driver_specific": {} 00:11:17.613 } 00:11:17.613 ] 00:11:17.613 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:17.613 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:17.613 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:17.613 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.872 BaseBdev3 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.872 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.131 [ 00:11:18.131 { 00:11:18.131 "name": "BaseBdev3", 00:11:18.131 "aliases": [ 00:11:18.131 "a39eb331-4a2e-11ef-9c8e-7947904e2597" 00:11:18.131 ], 00:11:18.131 "product_name": "Malloc disk", 00:11:18.131 "block_size": 512, 00:11:18.131 "num_blocks": 65536, 00:11:18.131 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:18.131 "assigned_rate_limits": { 00:11:18.131 "rw_ios_per_sec": 0, 00:11:18.131 "rw_mbytes_per_sec": 0, 00:11:18.131 "r_mbytes_per_sec": 0, 00:11:18.131 "w_mbytes_per_sec": 0 00:11:18.131 }, 00:11:18.131 "claimed": false, 00:11:18.131 "zoned": false, 00:11:18.131 "supported_io_types": { 00:11:18.131 "read": true, 00:11:18.131 "write": true, 00:11:18.131 "unmap": true, 00:11:18.131 "flush": true, 00:11:18.131 "reset": true, 00:11:18.131 "nvme_admin": false, 00:11:18.131 "nvme_io": false, 00:11:18.131 "nvme_io_md": 
false, 00:11:18.131 "write_zeroes": true, 00:11:18.131 "zcopy": true, 00:11:18.131 "get_zone_info": false, 00:11:18.131 "zone_management": false, 00:11:18.131 "zone_append": false, 00:11:18.131 "compare": false, 00:11:18.131 "compare_and_write": false, 00:11:18.131 "abort": true, 00:11:18.131 "seek_hole": false, 00:11:18.131 "seek_data": false, 00:11:18.131 "copy": true, 00:11:18.131 "nvme_iov_md": false 00:11:18.131 }, 00:11:18.131 "memory_domains": [ 00:11:18.131 { 00:11:18.131 "dma_device_id": "system", 00:11:18.131 "dma_device_type": 1 00:11:18.131 }, 00:11:18.131 { 00:11:18.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.131 "dma_device_type": 2 00:11:18.131 } 00:11:18.131 ], 00:11:18.131 "driver_specific": {} 00:11:18.131 } 00:11:18.131 ] 00:11:18.131 02:36:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:18.131 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:18.131 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:18.131 02:36:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.390 BaseBdev4 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:18.390 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.683 [ 00:11:18.683 { 00:11:18.683 "name": "BaseBdev4", 00:11:18.683 "aliases": [ 00:11:18.683 "a3ed6f39-4a2e-11ef-9c8e-7947904e2597" 00:11:18.683 ], 00:11:18.683 "product_name": "Malloc disk", 00:11:18.683 "block_size": 512, 00:11:18.683 "num_blocks": 65536, 00:11:18.683 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:18.683 "assigned_rate_limits": { 00:11:18.683 "rw_ios_per_sec": 0, 00:11:18.683 "rw_mbytes_per_sec": 0, 00:11:18.683 "r_mbytes_per_sec": 0, 00:11:18.683 "w_mbytes_per_sec": 0 00:11:18.683 }, 00:11:18.683 "claimed": false, 00:11:18.683 "zoned": false, 00:11:18.683 "supported_io_types": { 00:11:18.683 "read": true, 00:11:18.683 "write": true, 00:11:18.683 "unmap": true, 00:11:18.683 "flush": true, 00:11:18.683 "reset": true, 00:11:18.683 "nvme_admin": false, 00:11:18.683 "nvme_io": false, 00:11:18.683 "nvme_io_md": false, 00:11:18.683 "write_zeroes": true, 00:11:18.683 "zcopy": true, 00:11:18.683 "get_zone_info": false, 00:11:18.683 "zone_management": false, 00:11:18.683 "zone_append": false, 00:11:18.683 "compare": false, 00:11:18.683 "compare_and_write": false, 00:11:18.683 "abort": true, 00:11:18.683 "seek_hole": false, 00:11:18.683 "seek_data": false, 
00:11:18.683 "copy": true, 00:11:18.683 "nvme_iov_md": false 00:11:18.683 }, 00:11:18.683 "memory_domains": [ 00:11:18.683 { 00:11:18.683 "dma_device_id": "system", 00:11:18.683 "dma_device_type": 1 00:11:18.683 }, 00:11:18.683 { 00:11:18.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.683 "dma_device_type": 2 00:11:18.683 } 00:11:18.683 ], 00:11:18.683 "driver_specific": {} 00:11:18.683 } 00:11:18.683 ] 00:11:18.683 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:18.683 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:18.683 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:18.683 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:18.943 [2024-07-25 02:36:05.621258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.943 [2024-07-25 02:36:05.621297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.943 [2024-07-25 02:36:05.621303] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.943 [2024-07-25 02:36:05.621721] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.943 [2024-07-25 02:36:05.621737] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.943 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.943 "name": "Existed_Raid", 00:11:18.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.943 "strip_size_kb": 64, 00:11:18.943 "state": "configuring", 00:11:18.944 "raid_level": "raid0", 00:11:18.944 "superblock": false, 00:11:18.944 "num_base_bdevs": 4, 00:11:18.944 "num_base_bdevs_discovered": 3, 00:11:18.944 "num_base_bdevs_operational": 
4, 00:11:18.944 "base_bdevs_list": [ 00:11:18.944 { 00:11:18.944 "name": "BaseBdev1", 00:11:18.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.944 "is_configured": false, 00:11:18.944 "data_offset": 0, 00:11:18.944 "data_size": 0 00:11:18.944 }, 00:11:18.944 { 00:11:18.944 "name": "BaseBdev2", 00:11:18.944 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:18.944 "is_configured": true, 00:11:18.944 "data_offset": 0, 00:11:18.944 "data_size": 65536 00:11:18.944 }, 00:11:18.944 { 00:11:18.944 "name": "BaseBdev3", 00:11:18.944 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:18.944 "is_configured": true, 00:11:18.944 "data_offset": 0, 00:11:18.944 "data_size": 65536 00:11:18.944 }, 00:11:18.944 { 00:11:18.944 "name": "BaseBdev4", 00:11:18.944 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:18.944 "is_configured": true, 00:11:18.944 "data_offset": 0, 00:11:18.944 "data_size": 65536 00:11:18.944 } 00:11:18.944 ] 00:11:18.944 }' 00:11:18.944 02:36:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.944 02:36:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.202 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:19.461 [2024-07-25 02:36:06.249249] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.461 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.719 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.719 "name": "Existed_Raid", 00:11:19.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.719 "strip_size_kb": 64, 00:11:19.719 "state": "configuring", 00:11:19.719 "raid_level": "raid0", 00:11:19.719 "superblock": false, 00:11:19.719 "num_base_bdevs": 4, 00:11:19.719 "num_base_bdevs_discovered": 2, 00:11:19.719 "num_base_bdevs_operational": 4, 00:11:19.719 "base_bdevs_list": [ 00:11:19.719 { 00:11:19.719 "name": "BaseBdev1", 00:11:19.719 "uuid": "00000000-0000-0000-0000-000000000000", 
00:11:19.719 "is_configured": false, 00:11:19.719 "data_offset": 0, 00:11:19.719 "data_size": 0 00:11:19.719 }, 00:11:19.719 { 00:11:19.719 "name": null, 00:11:19.719 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:19.719 "is_configured": false, 00:11:19.719 "data_offset": 0, 00:11:19.719 "data_size": 65536 00:11:19.719 }, 00:11:19.719 { 00:11:19.719 "name": "BaseBdev3", 00:11:19.719 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:19.719 "is_configured": true, 00:11:19.719 "data_offset": 0, 00:11:19.719 "data_size": 65536 00:11:19.719 }, 00:11:19.719 { 00:11:19.719 "name": "BaseBdev4", 00:11:19.719 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:19.719 "is_configured": true, 00:11:19.719 "data_offset": 0, 00:11:19.720 "data_size": 65536 00:11:19.720 } 00:11:19.720 ] 00:11:19.720 }' 00:11:19.720 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.720 02:36:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.978 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.978 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.978 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:19.978 02:36:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.238 [2024-07-25 02:36:07.037342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.238 BaseBdev1 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:20.238 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:20.496 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.496 [ 00:11:20.496 { 00:11:20.496 "name": "BaseBdev1", 00:11:20.496 "aliases": [ 00:11:20.496 "a51574f9-4a2e-11ef-9c8e-7947904e2597" 00:11:20.496 ], 00:11:20.496 "product_name": "Malloc disk", 00:11:20.496 "block_size": 512, 00:11:20.496 "num_blocks": 65536, 00:11:20.496 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:20.496 "assigned_rate_limits": { 00:11:20.496 "rw_ios_per_sec": 0, 00:11:20.496 "rw_mbytes_per_sec": 0, 00:11:20.496 "r_mbytes_per_sec": 0, 00:11:20.496 "w_mbytes_per_sec": 0 00:11:20.496 }, 00:11:20.496 "claimed": true, 00:11:20.496 "claim_type": "exclusive_write", 00:11:20.496 "zoned": false, 00:11:20.496 "supported_io_types": { 00:11:20.496 "read": true, 00:11:20.496 
"write": true, 00:11:20.497 "unmap": true, 00:11:20.497 "flush": true, 00:11:20.497 "reset": true, 00:11:20.497 "nvme_admin": false, 00:11:20.497 "nvme_io": false, 00:11:20.497 "nvme_io_md": false, 00:11:20.497 "write_zeroes": true, 00:11:20.497 "zcopy": true, 00:11:20.497 "get_zone_info": false, 00:11:20.497 "zone_management": false, 00:11:20.497 "zone_append": false, 00:11:20.497 "compare": false, 00:11:20.497 "compare_and_write": false, 00:11:20.497 "abort": true, 00:11:20.497 "seek_hole": false, 00:11:20.497 "seek_data": false, 00:11:20.497 "copy": true, 00:11:20.497 "nvme_iov_md": false 00:11:20.497 }, 00:11:20.497 "memory_domains": [ 00:11:20.497 { 00:11:20.497 "dma_device_id": "system", 00:11:20.497 "dma_device_type": 1 00:11:20.497 }, 00:11:20.497 { 00:11:20.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.497 "dma_device_type": 2 00:11:20.497 } 00:11:20.497 ], 00:11:20.497 "driver_specific": {} 00:11:20.497 } 00:11:20.497 ] 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.755 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.755 "name": "Existed_Raid", 00:11:20.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.755 "strip_size_kb": 64, 00:11:20.755 "state": "configuring", 00:11:20.755 "raid_level": "raid0", 00:11:20.755 "superblock": false, 00:11:20.755 "num_base_bdevs": 4, 00:11:20.755 "num_base_bdevs_discovered": 3, 00:11:20.755 "num_base_bdevs_operational": 4, 00:11:20.755 "base_bdevs_list": [ 00:11:20.755 { 00:11:20.755 "name": "BaseBdev1", 00:11:20.755 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:20.755 "is_configured": true, 00:11:20.755 "data_offset": 0, 00:11:20.755 "data_size": 65536 00:11:20.755 }, 00:11:20.755 { 00:11:20.755 "name": null, 00:11:20.756 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:20.756 "is_configured": false, 00:11:20.756 "data_offset": 0, 00:11:20.756 "data_size": 65536 00:11:20.756 }, 00:11:20.756 { 00:11:20.756 "name": "BaseBdev3", 00:11:20.756 "uuid": 
"a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:20.756 "is_configured": true, 00:11:20.756 "data_offset": 0, 00:11:20.756 "data_size": 65536 00:11:20.756 }, 00:11:20.756 { 00:11:20.756 "name": "BaseBdev4", 00:11:20.756 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:20.756 "is_configured": true, 00:11:20.756 "data_offset": 0, 00:11:20.756 "data_size": 65536 00:11:20.756 } 00:11:20.756 ] 00:11:20.756 }' 00:11:20.756 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.756 02:36:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.014 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.014 02:36:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.273 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:21.273 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:21.531 [2024-07-25 02:36:08.181258] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:21.531 "name": "Existed_Raid", 00:11:21.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.531 "strip_size_kb": 64, 00:11:21.531 "state": "configuring", 00:11:21.531 "raid_level": "raid0", 00:11:21.531 "superblock": false, 00:11:21.531 "num_base_bdevs": 4, 00:11:21.531 "num_base_bdevs_discovered": 2, 00:11:21.531 "num_base_bdevs_operational": 4, 00:11:21.531 "base_bdevs_list": [ 00:11:21.531 { 00:11:21.531 "name": "BaseBdev1", 00:11:21.531 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:21.531 "is_configured": true, 00:11:21.531 "data_offset": 0, 00:11:21.531 "data_size": 65536 00:11:21.531 }, 00:11:21.531 { 
00:11:21.531 "name": null, 00:11:21.531 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:21.531 "is_configured": false, 00:11:21.531 "data_offset": 0, 00:11:21.531 "data_size": 65536 00:11:21.531 }, 00:11:21.531 { 00:11:21.531 "name": null, 00:11:21.531 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:21.531 "is_configured": false, 00:11:21.531 "data_offset": 0, 00:11:21.531 "data_size": 65536 00:11:21.531 }, 00:11:21.531 { 00:11:21.531 "name": "BaseBdev4", 00:11:21.531 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:21.531 "is_configured": true, 00:11:21.531 "data_offset": 0, 00:11:21.531 "data_size": 65536 00:11:21.531 } 00:11:21.531 ] 00:11:21.531 }' 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:21.531 02:36:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.790 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.790 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.049 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:22.049 02:36:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:22.309 [2024-07-25 02:36:08.997266] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.309 "name": "Existed_Raid", 00:11:22.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.309 "strip_size_kb": 64, 00:11:22.309 "state": "configuring", 00:11:22.309 "raid_level": "raid0", 00:11:22.309 "superblock": false, 00:11:22.309 "num_base_bdevs": 4, 00:11:22.309 "num_base_bdevs_discovered": 3, 00:11:22.309 
"num_base_bdevs_operational": 4, 00:11:22.309 "base_bdevs_list": [ 00:11:22.309 { 00:11:22.309 "name": "BaseBdev1", 00:11:22.309 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:22.309 "is_configured": true, 00:11:22.309 "data_offset": 0, 00:11:22.309 "data_size": 65536 00:11:22.309 }, 00:11:22.309 { 00:11:22.309 "name": null, 00:11:22.309 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:22.309 "is_configured": false, 00:11:22.309 "data_offset": 0, 00:11:22.309 "data_size": 65536 00:11:22.309 }, 00:11:22.309 { 00:11:22.309 "name": "BaseBdev3", 00:11:22.309 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:22.309 "is_configured": true, 00:11:22.309 "data_offset": 0, 00:11:22.309 "data_size": 65536 00:11:22.309 }, 00:11:22.309 { 00:11:22.309 "name": "BaseBdev4", 00:11:22.309 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:22.309 "is_configured": true, 00:11:22.309 "data_offset": 0, 00:11:22.309 "data_size": 65536 00:11:22.309 } 00:11:22.309 ] 00:11:22.309 }' 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.309 02:36:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.569 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.569 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.828 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:22.828 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:23.087 [2024-07-25 02:36:09.817279] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.087 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.087 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.088 02:36:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.347 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.347 "name": "Existed_Raid", 00:11:23.347 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:23.347 "strip_size_kb": 64, 00:11:23.347 "state": "configuring", 00:11:23.347 "raid_level": "raid0", 00:11:23.347 "superblock": false, 00:11:23.347 "num_base_bdevs": 4, 00:11:23.347 "num_base_bdevs_discovered": 2, 00:11:23.347 "num_base_bdevs_operational": 4, 00:11:23.347 "base_bdevs_list": [ 00:11:23.347 { 00:11:23.347 "name": null, 00:11:23.347 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:23.347 "is_configured": false, 00:11:23.347 "data_offset": 0, 00:11:23.347 "data_size": 65536 00:11:23.347 }, 00:11:23.347 { 00:11:23.347 "name": null, 00:11:23.347 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:23.347 "is_configured": false, 00:11:23.347 "data_offset": 0, 00:11:23.347 "data_size": 65536 00:11:23.347 }, 00:11:23.347 { 00:11:23.347 "name": "BaseBdev3", 00:11:23.347 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:23.347 "is_configured": true, 00:11:23.347 "data_offset": 0, 00:11:23.347 "data_size": 65536 00:11:23.347 }, 00:11:23.347 { 00:11:23.347 "name": "BaseBdev4", 00:11:23.347 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:23.347 "is_configured": true, 00:11:23.347 "data_offset": 0, 00:11:23.347 "data_size": 65536 00:11:23.347 } 00:11:23.347 ] 00:11:23.347 }' 00:11:23.347 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.347 02:36:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.607 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.607 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:23.607 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:23.607 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:23.866 [2024-07-25 02:36:10.617921] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:11:23.866 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.125 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.125 "name": "Existed_Raid", 00:11:24.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.125 "strip_size_kb": 64, 00:11:24.125 "state": "configuring", 00:11:24.125 "raid_level": "raid0", 00:11:24.125 "superblock": false, 00:11:24.125 "num_base_bdevs": 4, 00:11:24.125 "num_base_bdevs_discovered": 3, 00:11:24.125 "num_base_bdevs_operational": 4, 00:11:24.125 "base_bdevs_list": [ 00:11:24.125 { 00:11:24.125 "name": null, 00:11:24.125 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:24.125 "is_configured": false, 00:11:24.125 "data_offset": 0, 00:11:24.125 "data_size": 65536 00:11:24.125 }, 00:11:24.125 { 00:11:24.125 "name": "BaseBdev2", 00:11:24.125 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:24.125 "is_configured": true, 00:11:24.125 "data_offset": 0, 00:11:24.125 "data_size": 65536 00:11:24.125 }, 00:11:24.125 { 00:11:24.125 "name": "BaseBdev3", 00:11:24.125 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:24.125 "is_configured": true, 00:11:24.125 "data_offset": 0, 00:11:24.125 "data_size": 65536 00:11:24.125 }, 00:11:24.125 { 00:11:24.125 "name": "BaseBdev4", 00:11:24.125 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:24.125 "is_configured": true, 00:11:24.125 "data_offset": 0, 00:11:24.125 "data_size": 65536 00:11:24.125 } 00:11:24.125 ] 00:11:24.125 }' 00:11:24.125 02:36:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.125 02:36:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.384 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.384 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:24.384 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:24.384 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:24.384 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.644 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a51574f9-4a2e-11ef-9c8e-7947904e2597 00:11:24.903 [2024-07-25 02:36:11.578011] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:24.903 [2024-07-25 02:36:11.578027] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x311af3434f00 00:11:24.903 [2024-07-25 02:36:11.578030] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:24.903 [2024-07-25 02:36:11.578048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x311af3497e20 00:11:24.903 [2024-07-25 02:36:11.578096] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x311af3434f00 00:11:24.903 [2024-07-25 02:36:11.578099] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x311af3434f00 00:11:24.903 [2024-07-25 
02:36:11.578122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.903 NewBaseBdev 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:24.903 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:24.904 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:25.163 [ 00:11:25.163 { 00:11:25.163 "name": "NewBaseBdev", 00:11:25.163 "aliases": [ 00:11:25.163 "a51574f9-4a2e-11ef-9c8e-7947904e2597" 00:11:25.163 ], 00:11:25.163 "product_name": "Malloc disk", 00:11:25.163 "block_size": 512, 00:11:25.163 "num_blocks": 65536, 00:11:25.163 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:25.163 "assigned_rate_limits": { 00:11:25.163 "rw_ios_per_sec": 0, 00:11:25.163 "rw_mbytes_per_sec": 0, 00:11:25.163 "r_mbytes_per_sec": 0, 00:11:25.163 "w_mbytes_per_sec": 0 00:11:25.163 }, 00:11:25.163 "claimed": true, 00:11:25.163 "claim_type": "exclusive_write", 00:11:25.163 "zoned": false, 00:11:25.163 "supported_io_types": { 00:11:25.163 "read": true, 00:11:25.163 "write": true, 00:11:25.163 "unmap": true, 00:11:25.163 "flush": true, 00:11:25.163 "reset": true, 00:11:25.163 "nvme_admin": false, 00:11:25.163 "nvme_io": false, 00:11:25.163 "nvme_io_md": false, 00:11:25.163 "write_zeroes": true, 00:11:25.163 "zcopy": true, 00:11:25.163 "get_zone_info": false, 00:11:25.163 "zone_management": false, 00:11:25.163 "zone_append": false, 00:11:25.163 "compare": false, 00:11:25.163 "compare_and_write": false, 00:11:25.163 "abort": true, 00:11:25.163 "seek_hole": false, 00:11:25.163 "seek_data": false, 00:11:25.163 "copy": true, 00:11:25.163 "nvme_iov_md": false 00:11:25.163 }, 00:11:25.163 "memory_domains": [ 00:11:25.163 { 00:11:25.163 "dma_device_id": "system", 00:11:25.163 "dma_device_type": 1 00:11:25.163 }, 00:11:25.163 { 00:11:25.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.163 "dma_device_type": 2 00:11:25.163 } 00:11:25.163 ], 00:11:25.163 "driver_specific": {} 00:11:25.163 } 00:11:25.163 ] 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.163 
02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.163 02:36:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.422 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.422 "name": "Existed_Raid", 00:11:25.422 "uuid": "a7ca52fb-4a2e-11ef-9c8e-7947904e2597", 00:11:25.422 "strip_size_kb": 64, 00:11:25.422 "state": "online", 00:11:25.422 "raid_level": "raid0", 00:11:25.422 "superblock": false, 00:11:25.422 "num_base_bdevs": 4, 00:11:25.422 "num_base_bdevs_discovered": 4, 00:11:25.422 "num_base_bdevs_operational": 4, 00:11:25.422 "base_bdevs_list": [ 00:11:25.422 { 00:11:25.422 "name": "NewBaseBdev", 00:11:25.422 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:25.422 "is_configured": true, 00:11:25.422 "data_offset": 0, 00:11:25.422 "data_size": 65536 00:11:25.422 }, 00:11:25.422 { 00:11:25.422 "name": "BaseBdev2", 00:11:25.422 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:25.422 "is_configured": true, 00:11:25.422 "data_offset": 0, 00:11:25.422 "data_size": 65536 00:11:25.422 }, 00:11:25.422 { 00:11:25.422 "name": "BaseBdev3", 00:11:25.422 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:25.422 "is_configured": true, 00:11:25.422 "data_offset": 0, 00:11:25.422 "data_size": 65536 00:11:25.422 }, 00:11:25.422 { 00:11:25.422 "name": "BaseBdev4", 00:11:25.422 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:25.422 "is_configured": true, 00:11:25.422 "data_offset": 0, 00:11:25.422 "data_size": 65536 00:11:25.422 } 00:11:25.422 ] 00:11:25.422 }' 00:11:25.422 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.422 02:36:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:25.682 [2024-07-25 02:36:12.565972] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:25.682 "name": "Existed_Raid", 00:11:25.682 "aliases": [ 00:11:25.682 "a7ca52fb-4a2e-11ef-9c8e-7947904e2597" 00:11:25.682 ], 00:11:25.682 "product_name": "Raid Volume", 00:11:25.682 "block_size": 512, 00:11:25.682 "num_blocks": 262144, 00:11:25.682 "uuid": "a7ca52fb-4a2e-11ef-9c8e-7947904e2597", 00:11:25.682 "assigned_rate_limits": { 00:11:25.682 "rw_ios_per_sec": 0, 00:11:25.682 "rw_mbytes_per_sec": 0, 00:11:25.682 "r_mbytes_per_sec": 0, 00:11:25.682 "w_mbytes_per_sec": 0 00:11:25.682 }, 00:11:25.682 "claimed": false, 00:11:25.682 "zoned": false, 00:11:25.682 "supported_io_types": { 00:11:25.682 "read": true, 00:11:25.682 "write": true, 00:11:25.682 "unmap": true, 00:11:25.682 "flush": true, 00:11:25.682 "reset": true, 00:11:25.682 "nvme_admin": false, 00:11:25.682 "nvme_io": false, 00:11:25.682 "nvme_io_md": false, 00:11:25.682 "write_zeroes": true, 00:11:25.682 "zcopy": false, 00:11:25.682 "get_zone_info": false, 00:11:25.682 "zone_management": false, 00:11:25.682 "zone_append": false, 00:11:25.682 "compare": false, 00:11:25.682 "compare_and_write": false, 00:11:25.682 "abort": false, 00:11:25.682 "seek_hole": false, 00:11:25.682 "seek_data": false, 00:11:25.682 "copy": false, 00:11:25.682 "nvme_iov_md": false 00:11:25.682 }, 00:11:25.682 "memory_domains": [ 00:11:25.682 { 00:11:25.682 "dma_device_id": "system", 00:11:25.682 "dma_device_type": 1 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.682 "dma_device_type": 2 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "system", 00:11:25.682 "dma_device_type": 1 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.682 "dma_device_type": 2 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "system", 00:11:25.682 "dma_device_type": 1 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.682 "dma_device_type": 2 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "system", 00:11:25.682 "dma_device_type": 1 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.682 "dma_device_type": 2 00:11:25.682 } 00:11:25.682 ], 00:11:25.682 "driver_specific": { 00:11:25.682 "raid": { 00:11:25.682 "uuid": "a7ca52fb-4a2e-11ef-9c8e-7947904e2597", 00:11:25.682 "strip_size_kb": 64, 00:11:25.682 "state": "online", 00:11:25.682 "raid_level": "raid0", 00:11:25.682 "superblock": false, 00:11:25.682 "num_base_bdevs": 4, 00:11:25.682 "num_base_bdevs_discovered": 4, 00:11:25.682 "num_base_bdevs_operational": 4, 00:11:25.682 "base_bdevs_list": [ 00:11:25.682 { 00:11:25.682 "name": "NewBaseBdev", 00:11:25.682 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:25.682 "is_configured": true, 00:11:25.682 "data_offset": 0, 00:11:25.682 "data_size": 65536 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "name": "BaseBdev2", 00:11:25.682 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:25.682 "is_configured": true, 00:11:25.682 "data_offset": 0, 00:11:25.682 "data_size": 65536 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "name": "BaseBdev3", 00:11:25.682 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:25.682 "is_configured": true, 00:11:25.682 "data_offset": 0, 00:11:25.682 "data_size": 65536 00:11:25.682 }, 00:11:25.682 { 00:11:25.682 "name": "BaseBdev4", 00:11:25.682 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:25.682 
"is_configured": true, 00:11:25.682 "data_offset": 0, 00:11:25.682 "data_size": 65536 00:11:25.682 } 00:11:25.682 ] 00:11:25.682 } 00:11:25.682 } 00:11:25.682 }' 00:11:25.682 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:25.942 BaseBdev2 00:11:25.942 BaseBdev3 00:11:25.942 BaseBdev4' 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:25.942 "name": "NewBaseBdev", 00:11:25.942 "aliases": [ 00:11:25.942 "a51574f9-4a2e-11ef-9c8e-7947904e2597" 00:11:25.942 ], 00:11:25.942 "product_name": "Malloc disk", 00:11:25.942 "block_size": 512, 00:11:25.942 "num_blocks": 65536, 00:11:25.942 "uuid": "a51574f9-4a2e-11ef-9c8e-7947904e2597", 00:11:25.942 "assigned_rate_limits": { 00:11:25.942 "rw_ios_per_sec": 0, 00:11:25.942 "rw_mbytes_per_sec": 0, 00:11:25.942 "r_mbytes_per_sec": 0, 00:11:25.942 "w_mbytes_per_sec": 0 00:11:25.942 }, 00:11:25.942 "claimed": true, 00:11:25.942 "claim_type": "exclusive_write", 00:11:25.942 "zoned": false, 00:11:25.942 "supported_io_types": { 00:11:25.942 "read": true, 00:11:25.942 "write": true, 00:11:25.942 "unmap": true, 00:11:25.942 "flush": true, 00:11:25.942 "reset": true, 00:11:25.942 "nvme_admin": false, 00:11:25.942 "nvme_io": false, 00:11:25.942 "nvme_io_md": false, 00:11:25.942 "write_zeroes": true, 00:11:25.942 "zcopy": true, 00:11:25.942 "get_zone_info": false, 00:11:25.942 "zone_management": false, 00:11:25.942 "zone_append": false, 00:11:25.942 "compare": false, 00:11:25.942 "compare_and_write": false, 00:11:25.942 "abort": true, 00:11:25.942 "seek_hole": false, 00:11:25.942 "seek_data": false, 00:11:25.942 "copy": true, 00:11:25.942 "nvme_iov_md": false 00:11:25.942 }, 00:11:25.942 "memory_domains": [ 00:11:25.942 { 00:11:25.942 "dma_device_id": "system", 00:11:25.942 "dma_device_type": 1 00:11:25.942 }, 00:11:25.942 { 00:11:25.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.942 "dma_device_type": 2 00:11:25.942 } 00:11:25.942 ], 00:11:25.942 "driver_specific": {} 00:11:25.942 }' 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:25.942 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:26.202 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:26.202 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:26.202 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:26.202 02:36:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:26.202 "name": "BaseBdev2", 00:11:26.202 "aliases": [ 00:11:26.202 "a34ebe31-4a2e-11ef-9c8e-7947904e2597" 00:11:26.202 ], 00:11:26.202 "product_name": "Malloc disk", 00:11:26.202 "block_size": 512, 00:11:26.202 "num_blocks": 65536, 00:11:26.202 "uuid": "a34ebe31-4a2e-11ef-9c8e-7947904e2597", 00:11:26.202 "assigned_rate_limits": { 00:11:26.202 "rw_ios_per_sec": 0, 00:11:26.202 "rw_mbytes_per_sec": 0, 00:11:26.202 "r_mbytes_per_sec": 0, 00:11:26.202 "w_mbytes_per_sec": 0 00:11:26.202 }, 00:11:26.202 "claimed": true, 00:11:26.202 "claim_type": "exclusive_write", 00:11:26.202 "zoned": false, 00:11:26.202 "supported_io_types": { 00:11:26.202 "read": true, 00:11:26.202 "write": true, 00:11:26.202 "unmap": true, 00:11:26.202 "flush": true, 00:11:26.202 "reset": true, 00:11:26.202 "nvme_admin": false, 00:11:26.202 "nvme_io": false, 00:11:26.202 "nvme_io_md": false, 00:11:26.202 "write_zeroes": true, 00:11:26.202 "zcopy": true, 00:11:26.202 "get_zone_info": false, 00:11:26.202 "zone_management": false, 00:11:26.202 "zone_append": false, 00:11:26.202 "compare": false, 00:11:26.202 "compare_and_write": false, 00:11:26.202 "abort": true, 00:11:26.202 "seek_hole": false, 00:11:26.202 "seek_data": false, 00:11:26.202 "copy": true, 00:11:26.202 "nvme_iov_md": false 00:11:26.202 }, 00:11:26.202 "memory_domains": [ 00:11:26.202 { 00:11:26.202 "dma_device_id": "system", 00:11:26.202 "dma_device_type": 1 00:11:26.202 }, 00:11:26.202 { 00:11:26.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.202 "dma_device_type": 2 00:11:26.202 } 00:11:26.202 ], 00:11:26.202 "driver_specific": {} 00:11:26.202 }' 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:26.202 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:26.462 "name": "BaseBdev3", 00:11:26.462 "aliases": [ 00:11:26.462 "a39eb331-4a2e-11ef-9c8e-7947904e2597" 00:11:26.462 ], 00:11:26.462 "product_name": "Malloc disk", 00:11:26.462 "block_size": 512, 00:11:26.462 "num_blocks": 65536, 00:11:26.462 "uuid": "a39eb331-4a2e-11ef-9c8e-7947904e2597", 00:11:26.462 "assigned_rate_limits": { 00:11:26.462 "rw_ios_per_sec": 0, 00:11:26.462 "rw_mbytes_per_sec": 0, 00:11:26.462 "r_mbytes_per_sec": 0, 00:11:26.462 "w_mbytes_per_sec": 0 00:11:26.462 }, 00:11:26.462 "claimed": true, 00:11:26.462 "claim_type": "exclusive_write", 00:11:26.462 "zoned": false, 00:11:26.462 "supported_io_types": { 00:11:26.462 "read": true, 00:11:26.462 "write": true, 00:11:26.462 "unmap": true, 00:11:26.462 "flush": true, 00:11:26.462 "reset": true, 00:11:26.462 "nvme_admin": false, 00:11:26.462 "nvme_io": false, 00:11:26.462 "nvme_io_md": false, 00:11:26.462 "write_zeroes": true, 00:11:26.462 "zcopy": true, 00:11:26.462 "get_zone_info": false, 00:11:26.462 "zone_management": false, 00:11:26.462 "zone_append": false, 00:11:26.462 "compare": false, 00:11:26.462 "compare_and_write": false, 00:11:26.462 "abort": true, 00:11:26.462 "seek_hole": false, 00:11:26.462 "seek_data": false, 00:11:26.462 "copy": true, 00:11:26.462 "nvme_iov_md": false 00:11:26.462 }, 00:11:26.462 "memory_domains": [ 00:11:26.462 { 00:11:26.462 "dma_device_id": "system", 00:11:26.462 "dma_device_type": 1 00:11:26.462 }, 00:11:26.462 { 00:11:26.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.462 "dma_device_type": 2 00:11:26.462 } 00:11:26.462 ], 00:11:26.462 "driver_specific": {} 00:11:26.462 }' 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:26.462 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:26.721 "name": "BaseBdev4", 00:11:26.721 "aliases": [ 00:11:26.721 "a3ed6f39-4a2e-11ef-9c8e-7947904e2597" 00:11:26.721 ], 00:11:26.721 "product_name": "Malloc disk", 00:11:26.721 "block_size": 512, 00:11:26.721 "num_blocks": 65536, 00:11:26.721 "uuid": "a3ed6f39-4a2e-11ef-9c8e-7947904e2597", 00:11:26.721 "assigned_rate_limits": { 00:11:26.721 "rw_ios_per_sec": 0, 00:11:26.721 "rw_mbytes_per_sec": 0, 00:11:26.721 "r_mbytes_per_sec": 0, 00:11:26.721 "w_mbytes_per_sec": 0 00:11:26.721 }, 00:11:26.721 "claimed": true, 00:11:26.721 "claim_type": "exclusive_write", 00:11:26.721 "zoned": false, 00:11:26.721 "supported_io_types": { 00:11:26.721 "read": true, 00:11:26.721 "write": true, 00:11:26.721 "unmap": true, 00:11:26.721 "flush": true, 00:11:26.721 "reset": true, 00:11:26.721 "nvme_admin": false, 00:11:26.721 "nvme_io": false, 00:11:26.721 "nvme_io_md": false, 00:11:26.721 "write_zeroes": true, 00:11:26.721 "zcopy": true, 00:11:26.721 "get_zone_info": false, 00:11:26.721 "zone_management": false, 00:11:26.721 "zone_append": false, 00:11:26.721 "compare": false, 00:11:26.721 "compare_and_write": false, 00:11:26.721 "abort": true, 00:11:26.721 "seek_hole": false, 00:11:26.721 "seek_data": false, 00:11:26.721 "copy": true, 00:11:26.721 "nvme_iov_md": false 00:11:26.721 }, 00:11:26.721 "memory_domains": [ 00:11:26.721 { 00:11:26.721 "dma_device_id": "system", 00:11:26.721 "dma_device_type": 1 00:11:26.721 }, 00:11:26.721 { 00:11:26.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.721 "dma_device_type": 2 00:11:26.721 } 00:11:26.721 ], 00:11:26.721 "driver_specific": {} 00:11:26.721 }' 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:26.721 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:26.980 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:26.981 [2024-07-25 02:36:13.837976] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.981 [2024-07-25 02:36:13.837990] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.981 [2024-07-25 02:36:13.838003] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.981 [2024-07-25 02:36:13.838015] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.981 [2024-07-25 02:36:13.838018] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x311af3434f00 name Existed_Raid, state offline 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58065 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58065 ']' 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58065 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58065 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:26.981 killing process with pid 58065 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58065' 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58065 00:11:26.981 [2024-07-25 02:36:13.866904] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.981 02:36:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58065 00:11:26.981 [2024-07-25 02:36:13.885818] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:27.240 00:11:27.240 real 0m20.058s 00:11:27.240 user 0m35.945s 00:11:27.240 sys 0m3.521s 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 ************************************ 00:11:27.240 END TEST raid_state_function_test 00:11:27.240 ************************************ 00:11:27.240 02:36:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:27.240 02:36:14 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:11:27.240 02:36:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:27.240 02:36:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.240 02:36:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 ************************************ 00:11:27.240 START TEST raid_state_function_test_sb 00:11:27.240 ************************************ 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 
00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=58856 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58856' 00:11:27.240 Process raid pid: 58856 00:11:27.240 02:36:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 58856 /var/tmp/spdk-raid.sock 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 58856 ']' 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.240 02:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.240 [2024-07-25 02:36:14.133985] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:11:27.240 [2024-07-25 02:36:14.134218] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:27.809 EAL: TSC is not safe to use in SMP mode 00:11:27.809 EAL: TSC is not invariant 00:11:27.809 [2024-07-25 02:36:14.554475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.809 [2024-07-25 02:36:14.645249] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:27.809 [2024-07-25 02:36:14.646934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.809 [2024-07-25 02:36:14.647517] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.809 [2024-07-25 02:36:14.647528] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:28.378 [2024-07-25 02:36:15.182384] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.378 [2024-07-25 02:36:15.182420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.378 [2024-07-25 02:36:15.182424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.378 [2024-07-25 02:36:15.182429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.378 [2024-07-25 02:36:15.182432] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.378 [2024-07-25 02:36:15.182437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.378 [2024-07-25 02:36:15.182439] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:28.378 [2024-07-25 02:36:15.182460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.378 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.638 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:28.638 "name": "Existed_Raid", 00:11:28.638 "uuid": 
"a9f04db5-4a2e-11ef-9c8e-7947904e2597", 00:11:28.638 "strip_size_kb": 64, 00:11:28.639 "state": "configuring", 00:11:28.639 "raid_level": "raid0", 00:11:28.639 "superblock": true, 00:11:28.639 "num_base_bdevs": 4, 00:11:28.639 "num_base_bdevs_discovered": 0, 00:11:28.639 "num_base_bdevs_operational": 4, 00:11:28.639 "base_bdevs_list": [ 00:11:28.639 { 00:11:28.639 "name": "BaseBdev1", 00:11:28.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.639 "is_configured": false, 00:11:28.639 "data_offset": 0, 00:11:28.639 "data_size": 0 00:11:28.639 }, 00:11:28.639 { 00:11:28.639 "name": "BaseBdev2", 00:11:28.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.639 "is_configured": false, 00:11:28.639 "data_offset": 0, 00:11:28.639 "data_size": 0 00:11:28.639 }, 00:11:28.639 { 00:11:28.639 "name": "BaseBdev3", 00:11:28.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.639 "is_configured": false, 00:11:28.639 "data_offset": 0, 00:11:28.639 "data_size": 0 00:11:28.639 }, 00:11:28.639 { 00:11:28.639 "name": "BaseBdev4", 00:11:28.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.639 "is_configured": false, 00:11:28.639 "data_offset": 0, 00:11:28.639 "data_size": 0 00:11:28.639 } 00:11:28.639 ] 00:11:28.639 }' 00:11:28.639 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:28.639 02:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.899 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:29.159 [2024-07-25 02:36:15.818389] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.159 [2024-07-25 02:36:15.818404] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xec079e34500 name Existed_Raid, state configuring 00:11:29.159 02:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:29.159 [2024-07-25 02:36:15.998401] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.159 [2024-07-25 02:36:15.998427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.159 [2024-07-25 02:36:15.998430] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.159 [2024-07-25 02:36:15.998436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.159 [2024-07-25 02:36:15.998438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.159 [2024-07-25 02:36:15.998443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.159 [2024-07-25 02:36:15.998446] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:29.159 [2024-07-25 02:36:15.998451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:29.159 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.419 [2024-07-25 02:36:16.155169] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:11:29.419 BaseBdev1 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:29.419 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.679 [ 00:11:29.679 { 00:11:29.679 "name": "BaseBdev1", 00:11:29.679 "aliases": [ 00:11:29.679 "aa849fa1-4a2e-11ef-9c8e-7947904e2597" 00:11:29.679 ], 00:11:29.679 "product_name": "Malloc disk", 00:11:29.679 "block_size": 512, 00:11:29.679 "num_blocks": 65536, 00:11:29.679 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:29.679 "assigned_rate_limits": { 00:11:29.679 "rw_ios_per_sec": 0, 00:11:29.679 "rw_mbytes_per_sec": 0, 00:11:29.679 "r_mbytes_per_sec": 0, 00:11:29.679 "w_mbytes_per_sec": 0 00:11:29.679 }, 00:11:29.679 "claimed": true, 00:11:29.679 "claim_type": "exclusive_write", 00:11:29.679 "zoned": false, 00:11:29.679 "supported_io_types": { 00:11:29.679 "read": true, 00:11:29.679 "write": true, 00:11:29.679 "unmap": true, 00:11:29.679 "flush": true, 00:11:29.679 "reset": true, 00:11:29.679 "nvme_admin": false, 00:11:29.679 "nvme_io": false, 00:11:29.679 "nvme_io_md": false, 00:11:29.679 "write_zeroes": true, 00:11:29.679 "zcopy": true, 00:11:29.679 "get_zone_info": false, 00:11:29.679 "zone_management": false, 00:11:29.679 "zone_append": false, 00:11:29.679 "compare": false, 00:11:29.679 "compare_and_write": false, 00:11:29.679 "abort": true, 00:11:29.679 "seek_hole": false, 00:11:29.679 "seek_data": false, 00:11:29.679 "copy": true, 00:11:29.679 "nvme_iov_md": false 00:11:29.679 }, 00:11:29.679 "memory_domains": [ 00:11:29.679 { 00:11:29.679 "dma_device_id": "system", 00:11:29.679 "dma_device_type": 1 00:11:29.679 }, 00:11:29.679 { 00:11:29.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.679 "dma_device_type": 2 00:11:29.679 } 00:11:29.679 ], 00:11:29.679 "driver_specific": {} 00:11:29.679 } 00:11:29.679 ] 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.679 02:36:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.679 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.939 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.939 "name": "Existed_Raid", 00:11:29.939 "uuid": "aa6cd17b-4a2e-11ef-9c8e-7947904e2597", 00:11:29.939 "strip_size_kb": 64, 00:11:29.939 "state": "configuring", 00:11:29.939 "raid_level": "raid0", 00:11:29.939 "superblock": true, 00:11:29.939 "num_base_bdevs": 4, 00:11:29.939 "num_base_bdevs_discovered": 1, 00:11:29.939 "num_base_bdevs_operational": 4, 00:11:29.939 "base_bdevs_list": [ 00:11:29.939 { 00:11:29.939 "name": "BaseBdev1", 00:11:29.939 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:29.939 "is_configured": true, 00:11:29.939 "data_offset": 2048, 00:11:29.939 "data_size": 63488 00:11:29.939 }, 00:11:29.939 { 00:11:29.939 "name": "BaseBdev2", 00:11:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.939 "is_configured": false, 00:11:29.939 "data_offset": 0, 00:11:29.939 "data_size": 0 00:11:29.939 }, 00:11:29.939 { 00:11:29.939 "name": "BaseBdev3", 00:11:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.940 "is_configured": false, 00:11:29.940 "data_offset": 0, 00:11:29.940 "data_size": 0 00:11:29.940 }, 00:11:29.940 { 00:11:29.940 "name": "BaseBdev4", 00:11:29.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.940 "is_configured": false, 00:11:29.940 "data_offset": 0, 00:11:29.940 "data_size": 0 00:11:29.940 } 00:11:29.940 ] 00:11:29.940 }' 00:11:29.940 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.940 02:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.199 02:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:30.460 [2024-07-25 02:36:17.130486] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.460 [2024-07-25 02:36:17.130504] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xec079e34500 name Existed_Raid, state configuring 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:30.460 [2024-07-25 02:36:17.306500] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.460 [2024-07-25 02:36:17.307119] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.460 [2024-07-25 02:36:17.307151] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.460 [2024-07-25 02:36:17.307156] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:30.460 [2024-07-25 02:36:17.307161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.460 [2024-07-25 02:36:17.307164] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:30.460 [2024-07-25 02:36:17.307169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.460 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.720 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:30.720 "name": "Existed_Raid", 00:11:30.720 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:30.720 "strip_size_kb": 64, 00:11:30.720 "state": "configuring", 00:11:30.720 "raid_level": "raid0", 00:11:30.720 "superblock": true, 00:11:30.720 "num_base_bdevs": 4, 00:11:30.720 "num_base_bdevs_discovered": 1, 00:11:30.720 "num_base_bdevs_operational": 4, 00:11:30.720 "base_bdevs_list": [ 00:11:30.720 { 00:11:30.720 "name": "BaseBdev1", 00:11:30.720 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:30.720 "is_configured": true, 00:11:30.720 "data_offset": 2048, 00:11:30.720 "data_size": 63488 00:11:30.720 }, 00:11:30.720 { 00:11:30.720 "name": "BaseBdev2", 00:11:30.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.720 "is_configured": false, 00:11:30.720 "data_offset": 0, 00:11:30.720 "data_size": 0 00:11:30.720 }, 00:11:30.720 { 00:11:30.720 "name": "BaseBdev3", 00:11:30.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.720 "is_configured": false, 00:11:30.720 "data_offset": 0, 00:11:30.720 "data_size": 0 00:11:30.720 }, 00:11:30.720 { 00:11:30.720 "name": "BaseBdev4", 
00:11:30.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.720 "is_configured": false, 00:11:30.720 "data_offset": 0, 00:11:30.720 "data_size": 0 00:11:30.720 } 00:11:30.720 ] 00:11:30.720 }' 00:11:30.720 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:30.720 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.980 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.242 [2024-07-25 02:36:17.946621] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.242 BaseBdev2 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:31.242 02:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:31.242 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.502 [ 00:11:31.502 { 00:11:31.502 "name": "BaseBdev2", 00:11:31.502 "aliases": [ 00:11:31.502 "ab96142a-4a2e-11ef-9c8e-7947904e2597" 00:11:31.502 ], 00:11:31.502 "product_name": "Malloc disk", 00:11:31.502 "block_size": 512, 00:11:31.502 "num_blocks": 65536, 00:11:31.502 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:31.502 "assigned_rate_limits": { 00:11:31.502 "rw_ios_per_sec": 0, 00:11:31.502 "rw_mbytes_per_sec": 0, 00:11:31.502 "r_mbytes_per_sec": 0, 00:11:31.502 "w_mbytes_per_sec": 0 00:11:31.502 }, 00:11:31.502 "claimed": true, 00:11:31.502 "claim_type": "exclusive_write", 00:11:31.502 "zoned": false, 00:11:31.502 "supported_io_types": { 00:11:31.502 "read": true, 00:11:31.502 "write": true, 00:11:31.502 "unmap": true, 00:11:31.502 "flush": true, 00:11:31.502 "reset": true, 00:11:31.502 "nvme_admin": false, 00:11:31.502 "nvme_io": false, 00:11:31.502 "nvme_io_md": false, 00:11:31.502 "write_zeroes": true, 00:11:31.502 "zcopy": true, 00:11:31.502 "get_zone_info": false, 00:11:31.502 "zone_management": false, 00:11:31.502 "zone_append": false, 00:11:31.502 "compare": false, 00:11:31.502 "compare_and_write": false, 00:11:31.502 "abort": true, 00:11:31.502 "seek_hole": false, 00:11:31.502 "seek_data": false, 00:11:31.502 "copy": true, 00:11:31.502 "nvme_iov_md": false 00:11:31.502 }, 00:11:31.502 "memory_domains": [ 00:11:31.502 { 00:11:31.502 "dma_device_id": "system", 00:11:31.502 "dma_device_type": 1 00:11:31.502 }, 00:11:31.502 { 00:11:31.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.503 "dma_device_type": 2 00:11:31.503 } 00:11:31.503 ], 00:11:31.503 "driver_specific": {} 00:11:31.503 } 00:11:31.503 ] 00:11:31.503 02:36:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.503 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.763 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.763 "name": "Existed_Raid", 00:11:31.763 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:31.763 "strip_size_kb": 64, 00:11:31.763 "state": "configuring", 00:11:31.763 "raid_level": "raid0", 00:11:31.763 "superblock": true, 00:11:31.763 "num_base_bdevs": 4, 00:11:31.763 "num_base_bdevs_discovered": 2, 00:11:31.763 "num_base_bdevs_operational": 4, 00:11:31.763 "base_bdevs_list": [ 00:11:31.763 { 00:11:31.763 "name": "BaseBdev1", 00:11:31.763 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:31.763 "is_configured": true, 00:11:31.763 "data_offset": 2048, 00:11:31.763 "data_size": 63488 00:11:31.763 }, 00:11:31.763 { 00:11:31.763 "name": "BaseBdev2", 00:11:31.763 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:31.763 "is_configured": true, 00:11:31.763 "data_offset": 2048, 00:11:31.763 "data_size": 63488 00:11:31.763 }, 00:11:31.763 { 00:11:31.763 "name": "BaseBdev3", 00:11:31.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.763 "is_configured": false, 00:11:31.763 "data_offset": 0, 00:11:31.763 "data_size": 0 00:11:31.763 }, 00:11:31.763 { 00:11:31.763 "name": "BaseBdev4", 00:11:31.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.763 "is_configured": false, 00:11:31.763 "data_offset": 0, 00:11:31.763 "data_size": 0 00:11:31.763 } 00:11:31.763 ] 00:11:31.763 }' 00:11:31.763 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.763 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:32.023 [2024-07-25 02:36:18.902639] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.023 BaseBdev3 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:32.023 02:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:32.284 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:32.544 [ 00:11:32.544 { 00:11:32.544 "name": "BaseBdev3", 00:11:32.544 "aliases": [ 00:11:32.544 "ac27f5a8-4a2e-11ef-9c8e-7947904e2597" 00:11:32.544 ], 00:11:32.544 "product_name": "Malloc disk", 00:11:32.544 "block_size": 512, 00:11:32.544 "num_blocks": 65536, 00:11:32.544 "uuid": "ac27f5a8-4a2e-11ef-9c8e-7947904e2597", 00:11:32.544 "assigned_rate_limits": { 00:11:32.544 "rw_ios_per_sec": 0, 00:11:32.544 "rw_mbytes_per_sec": 0, 00:11:32.544 "r_mbytes_per_sec": 0, 00:11:32.544 "w_mbytes_per_sec": 0 00:11:32.544 }, 00:11:32.544 "claimed": true, 00:11:32.544 "claim_type": "exclusive_write", 00:11:32.544 "zoned": false, 00:11:32.544 "supported_io_types": { 00:11:32.544 "read": true, 00:11:32.544 "write": true, 00:11:32.544 "unmap": true, 00:11:32.544 "flush": true, 00:11:32.544 "reset": true, 00:11:32.544 "nvme_admin": false, 00:11:32.544 "nvme_io": false, 00:11:32.544 "nvme_io_md": false, 00:11:32.544 "write_zeroes": true, 00:11:32.544 "zcopy": true, 00:11:32.544 "get_zone_info": false, 00:11:32.544 "zone_management": false, 00:11:32.544 "zone_append": false, 00:11:32.544 "compare": false, 00:11:32.544 "compare_and_write": false, 00:11:32.544 "abort": true, 00:11:32.544 "seek_hole": false, 00:11:32.544 "seek_data": false, 00:11:32.544 "copy": true, 00:11:32.544 "nvme_iov_md": false 00:11:32.544 }, 00:11:32.544 "memory_domains": [ 00:11:32.544 { 00:11:32.544 "dma_device_id": "system", 00:11:32.544 "dma_device_type": 1 00:11:32.544 }, 00:11:32.544 { 00:11:32.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.544 "dma_device_type": 2 00:11:32.544 } 00:11:32.544 ], 00:11:32.544 "driver_specific": {} 00:11:32.544 } 00:11:32.544 ] 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.544 "name": "Existed_Raid", 00:11:32.544 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:32.544 "strip_size_kb": 64, 00:11:32.544 "state": "configuring", 00:11:32.544 "raid_level": "raid0", 00:11:32.544 "superblock": true, 00:11:32.544 "num_base_bdevs": 4, 00:11:32.544 "num_base_bdevs_discovered": 3, 00:11:32.544 "num_base_bdevs_operational": 4, 00:11:32.544 "base_bdevs_list": [ 00:11:32.544 { 00:11:32.544 "name": "BaseBdev1", 00:11:32.544 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:32.544 "is_configured": true, 00:11:32.544 "data_offset": 2048, 00:11:32.544 "data_size": 63488 00:11:32.544 }, 00:11:32.544 { 00:11:32.544 "name": "BaseBdev2", 00:11:32.544 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:32.544 "is_configured": true, 00:11:32.544 "data_offset": 2048, 00:11:32.544 "data_size": 63488 00:11:32.544 }, 00:11:32.544 { 00:11:32.544 "name": "BaseBdev3", 00:11:32.544 "uuid": "ac27f5a8-4a2e-11ef-9c8e-7947904e2597", 00:11:32.544 "is_configured": true, 00:11:32.544 "data_offset": 2048, 00:11:32.544 "data_size": 63488 00:11:32.544 }, 00:11:32.544 { 00:11:32.544 "name": "BaseBdev4", 00:11:32.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.544 "is_configured": false, 00:11:32.544 "data_offset": 0, 00:11:32.544 "data_size": 0 00:11:32.544 } 00:11:32.544 ] 00:11:32.544 }' 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.544 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.115 [2024-07-25 02:36:19.882656] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.115 [2024-07-25 02:36:19.882701] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xec079e34a00 00:11:33.115 [2024-07-25 02:36:19.882705] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.115 [2024-07-25 
02:36:19.882721] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xec079e97e20 00:11:33.115 [2024-07-25 02:36:19.882757] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xec079e34a00 00:11:33.115 [2024-07-25 02:36:19.882759] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xec079e34a00 00:11:33.115 [2024-07-25 02:36:19.882774] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.115 BaseBdev4 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:33.115 02:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.375 [ 00:11:33.375 { 00:11:33.375 "name": "BaseBdev4", 00:11:33.375 "aliases": [ 00:11:33.375 "acbd7fea-4a2e-11ef-9c8e-7947904e2597" 00:11:33.375 ], 00:11:33.375 "product_name": "Malloc disk", 00:11:33.375 "block_size": 512, 00:11:33.375 "num_blocks": 65536, 00:11:33.375 "uuid": "acbd7fea-4a2e-11ef-9c8e-7947904e2597", 00:11:33.375 "assigned_rate_limits": { 00:11:33.375 "rw_ios_per_sec": 0, 00:11:33.375 "rw_mbytes_per_sec": 0, 00:11:33.375 "r_mbytes_per_sec": 0, 00:11:33.375 "w_mbytes_per_sec": 0 00:11:33.375 }, 00:11:33.375 "claimed": true, 00:11:33.375 "claim_type": "exclusive_write", 00:11:33.375 "zoned": false, 00:11:33.375 "supported_io_types": { 00:11:33.375 "read": true, 00:11:33.375 "write": true, 00:11:33.375 "unmap": true, 00:11:33.375 "flush": true, 00:11:33.375 "reset": true, 00:11:33.375 "nvme_admin": false, 00:11:33.375 "nvme_io": false, 00:11:33.375 "nvme_io_md": false, 00:11:33.375 "write_zeroes": true, 00:11:33.375 "zcopy": true, 00:11:33.375 "get_zone_info": false, 00:11:33.375 "zone_management": false, 00:11:33.375 "zone_append": false, 00:11:33.375 "compare": false, 00:11:33.375 "compare_and_write": false, 00:11:33.375 "abort": true, 00:11:33.375 "seek_hole": false, 00:11:33.375 "seek_data": false, 00:11:33.375 "copy": true, 00:11:33.375 "nvme_iov_md": false 00:11:33.375 }, 00:11:33.375 "memory_domains": [ 00:11:33.375 { 00:11:33.375 "dma_device_id": "system", 00:11:33.375 "dma_device_type": 1 00:11:33.375 }, 00:11:33.375 { 00:11:33.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.375 "dma_device_type": 2 00:11:33.375 } 00:11:33.375 ], 00:11:33.375 "driver_specific": {} 00:11:33.375 } 00:11:33.375 ] 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.375 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.635 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.635 "name": "Existed_Raid", 00:11:33.635 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:33.635 "strip_size_kb": 64, 00:11:33.635 "state": "online", 00:11:33.635 "raid_level": "raid0", 00:11:33.635 "superblock": true, 00:11:33.635 "num_base_bdevs": 4, 00:11:33.635 "num_base_bdevs_discovered": 4, 00:11:33.635 "num_base_bdevs_operational": 4, 00:11:33.635 "base_bdevs_list": [ 00:11:33.635 { 00:11:33.635 "name": "BaseBdev1", 00:11:33.635 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:33.635 "is_configured": true, 00:11:33.635 "data_offset": 2048, 00:11:33.635 "data_size": 63488 00:11:33.635 }, 00:11:33.635 { 00:11:33.635 "name": "BaseBdev2", 00:11:33.635 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:33.635 "is_configured": true, 00:11:33.635 "data_offset": 2048, 00:11:33.635 "data_size": 63488 00:11:33.635 }, 00:11:33.635 { 00:11:33.635 "name": "BaseBdev3", 00:11:33.635 "uuid": "ac27f5a8-4a2e-11ef-9c8e-7947904e2597", 00:11:33.635 "is_configured": true, 00:11:33.635 "data_offset": 2048, 00:11:33.635 "data_size": 63488 00:11:33.635 }, 00:11:33.636 { 00:11:33.636 "name": "BaseBdev4", 00:11:33.636 "uuid": "acbd7fea-4a2e-11ef-9c8e-7947904e2597", 00:11:33.636 "is_configured": true, 00:11:33.636 "data_offset": 2048, 00:11:33.636 "data_size": 63488 00:11:33.636 } 00:11:33.636 ] 00:11:33.636 }' 00:11:33.636 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.636 02:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.895 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:33.895 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:33.895 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:11:33.896 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:33.896 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:33.896 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:33.896 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:33.896 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:34.156 [2024-07-25 02:36:20.854648] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.156 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:34.156 "name": "Existed_Raid", 00:11:34.156 "aliases": [ 00:11:34.156 "ab346b0b-4a2e-11ef-9c8e-7947904e2597" 00:11:34.156 ], 00:11:34.156 "product_name": "Raid Volume", 00:11:34.156 "block_size": 512, 00:11:34.156 "num_blocks": 253952, 00:11:34.156 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:34.156 "assigned_rate_limits": { 00:11:34.156 "rw_ios_per_sec": 0, 00:11:34.156 "rw_mbytes_per_sec": 0, 00:11:34.156 "r_mbytes_per_sec": 0, 00:11:34.156 "w_mbytes_per_sec": 0 00:11:34.156 }, 00:11:34.156 "claimed": false, 00:11:34.156 "zoned": false, 00:11:34.156 "supported_io_types": { 00:11:34.156 "read": true, 00:11:34.156 "write": true, 00:11:34.156 "unmap": true, 00:11:34.156 "flush": true, 00:11:34.156 "reset": true, 00:11:34.156 "nvme_admin": false, 00:11:34.156 "nvme_io": false, 00:11:34.156 "nvme_io_md": false, 00:11:34.156 "write_zeroes": true, 00:11:34.156 "zcopy": false, 00:11:34.156 "get_zone_info": false, 00:11:34.156 "zone_management": false, 00:11:34.156 "zone_append": false, 00:11:34.156 "compare": false, 00:11:34.156 "compare_and_write": false, 00:11:34.156 "abort": false, 00:11:34.156 "seek_hole": false, 00:11:34.156 "seek_data": false, 00:11:34.156 "copy": false, 00:11:34.156 "nvme_iov_md": false 00:11:34.156 }, 00:11:34.156 "memory_domains": [ 00:11:34.156 { 00:11:34.156 "dma_device_id": "system", 00:11:34.156 "dma_device_type": 1 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.156 "dma_device_type": 2 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "system", 00:11:34.156 "dma_device_type": 1 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.156 "dma_device_type": 2 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "system", 00:11:34.156 "dma_device_type": 1 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.156 "dma_device_type": 2 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "system", 00:11:34.156 "dma_device_type": 1 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.156 "dma_device_type": 2 00:11:34.156 } 00:11:34.156 ], 00:11:34.156 "driver_specific": { 00:11:34.156 "raid": { 00:11:34.156 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:34.156 "strip_size_kb": 64, 00:11:34.156 "state": "online", 00:11:34.156 "raid_level": "raid0", 00:11:34.156 "superblock": true, 00:11:34.156 "num_base_bdevs": 4, 00:11:34.156 "num_base_bdevs_discovered": 4, 00:11:34.156 "num_base_bdevs_operational": 4, 00:11:34.156 "base_bdevs_list": [ 00:11:34.156 { 00:11:34.156 "name": "BaseBdev1", 00:11:34.156 "uuid": 
"aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:34.156 "is_configured": true, 00:11:34.156 "data_offset": 2048, 00:11:34.156 "data_size": 63488 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "name": "BaseBdev2", 00:11:34.156 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:34.156 "is_configured": true, 00:11:34.156 "data_offset": 2048, 00:11:34.156 "data_size": 63488 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "name": "BaseBdev3", 00:11:34.156 "uuid": "ac27f5a8-4a2e-11ef-9c8e-7947904e2597", 00:11:34.156 "is_configured": true, 00:11:34.156 "data_offset": 2048, 00:11:34.156 "data_size": 63488 00:11:34.156 }, 00:11:34.156 { 00:11:34.156 "name": "BaseBdev4", 00:11:34.156 "uuid": "acbd7fea-4a2e-11ef-9c8e-7947904e2597", 00:11:34.156 "is_configured": true, 00:11:34.156 "data_offset": 2048, 00:11:34.156 "data_size": 63488 00:11:34.156 } 00:11:34.156 ] 00:11:34.156 } 00:11:34.156 } 00:11:34.156 }' 00:11:34.156 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.156 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:34.156 BaseBdev2 00:11:34.156 BaseBdev3 00:11:34.156 BaseBdev4' 00:11:34.156 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.156 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:34.156 02:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.417 "name": "BaseBdev1", 00:11:34.417 "aliases": [ 00:11:34.417 "aa849fa1-4a2e-11ef-9c8e-7947904e2597" 00:11:34.417 ], 00:11:34.417 "product_name": "Malloc disk", 00:11:34.417 "block_size": 512, 00:11:34.417 "num_blocks": 65536, 00:11:34.417 "uuid": "aa849fa1-4a2e-11ef-9c8e-7947904e2597", 00:11:34.417 "assigned_rate_limits": { 00:11:34.417 "rw_ios_per_sec": 0, 00:11:34.417 "rw_mbytes_per_sec": 0, 00:11:34.417 "r_mbytes_per_sec": 0, 00:11:34.417 "w_mbytes_per_sec": 0 00:11:34.417 }, 00:11:34.417 "claimed": true, 00:11:34.417 "claim_type": "exclusive_write", 00:11:34.417 "zoned": false, 00:11:34.417 "supported_io_types": { 00:11:34.417 "read": true, 00:11:34.417 "write": true, 00:11:34.417 "unmap": true, 00:11:34.417 "flush": true, 00:11:34.417 "reset": true, 00:11:34.417 "nvme_admin": false, 00:11:34.417 "nvme_io": false, 00:11:34.417 "nvme_io_md": false, 00:11:34.417 "write_zeroes": true, 00:11:34.417 "zcopy": true, 00:11:34.417 "get_zone_info": false, 00:11:34.417 "zone_management": false, 00:11:34.417 "zone_append": false, 00:11:34.417 "compare": false, 00:11:34.417 "compare_and_write": false, 00:11:34.417 "abort": true, 00:11:34.417 "seek_hole": false, 00:11:34.417 "seek_data": false, 00:11:34.417 "copy": true, 00:11:34.417 "nvme_iov_md": false 00:11:34.417 }, 00:11:34.417 "memory_domains": [ 00:11:34.417 { 00:11:34.417 "dma_device_id": "system", 00:11:34.417 "dma_device_type": 1 00:11:34.417 }, 00:11:34.417 { 00:11:34.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.417 "dma_device_type": 2 00:11:34.417 } 00:11:34.417 ], 00:11:34.417 "driver_specific": {} 00:11:34.417 }' 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.417 02:36:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:34.417 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.677 "name": "BaseBdev2", 00:11:34.677 "aliases": [ 00:11:34.677 "ab96142a-4a2e-11ef-9c8e-7947904e2597" 00:11:34.677 ], 00:11:34.677 "product_name": "Malloc disk", 00:11:34.677 "block_size": 512, 00:11:34.677 "num_blocks": 65536, 00:11:34.677 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:34.677 "assigned_rate_limits": { 00:11:34.677 "rw_ios_per_sec": 0, 00:11:34.677 "rw_mbytes_per_sec": 0, 00:11:34.677 "r_mbytes_per_sec": 0, 00:11:34.677 "w_mbytes_per_sec": 0 00:11:34.677 }, 00:11:34.677 "claimed": true, 00:11:34.677 "claim_type": "exclusive_write", 00:11:34.677 "zoned": false, 00:11:34.677 "supported_io_types": { 00:11:34.677 "read": true, 00:11:34.677 "write": true, 00:11:34.677 "unmap": true, 00:11:34.677 "flush": true, 00:11:34.677 "reset": true, 00:11:34.677 "nvme_admin": false, 00:11:34.677 "nvme_io": false, 00:11:34.677 "nvme_io_md": false, 00:11:34.677 "write_zeroes": true, 00:11:34.677 "zcopy": true, 00:11:34.677 "get_zone_info": false, 00:11:34.677 "zone_management": false, 00:11:34.677 "zone_append": false, 00:11:34.677 "compare": false, 00:11:34.677 "compare_and_write": false, 00:11:34.677 "abort": true, 00:11:34.677 "seek_hole": false, 00:11:34.677 "seek_data": false, 00:11:34.677 "copy": true, 00:11:34.677 "nvme_iov_md": false 00:11:34.677 }, 00:11:34.677 "memory_domains": [ 00:11:34.677 { 00:11:34.677 "dma_device_id": "system", 00:11:34.677 "dma_device_type": 1 00:11:34.677 }, 00:11:34.677 { 00:11:34.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.677 "dma_device_type": 2 00:11:34.677 } 00:11:34.677 ], 00:11:34.677 "driver_specific": {} 00:11:34.677 }' 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:34.677 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:34.937 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:34.937 "name": "BaseBdev3", 00:11:34.937 "aliases": [ 00:11:34.937 "ac27f5a8-4a2e-11ef-9c8e-7947904e2597" 00:11:34.937 ], 00:11:34.937 "product_name": "Malloc disk", 00:11:34.937 "block_size": 512, 00:11:34.937 "num_blocks": 65536, 00:11:34.937 "uuid": "ac27f5a8-4a2e-11ef-9c8e-7947904e2597", 00:11:34.937 "assigned_rate_limits": { 00:11:34.937 "rw_ios_per_sec": 0, 00:11:34.937 "rw_mbytes_per_sec": 0, 00:11:34.937 "r_mbytes_per_sec": 0, 00:11:34.937 "w_mbytes_per_sec": 0 00:11:34.937 }, 00:11:34.937 "claimed": true, 00:11:34.937 "claim_type": "exclusive_write", 00:11:34.937 "zoned": false, 00:11:34.937 "supported_io_types": { 00:11:34.937 "read": true, 00:11:34.937 "write": true, 00:11:34.937 "unmap": true, 00:11:34.938 "flush": true, 00:11:34.938 "reset": true, 00:11:34.938 "nvme_admin": false, 00:11:34.938 "nvme_io": false, 00:11:34.938 "nvme_io_md": false, 00:11:34.938 "write_zeroes": true, 00:11:34.938 "zcopy": true, 00:11:34.938 "get_zone_info": false, 00:11:34.938 "zone_management": false, 00:11:34.938 "zone_append": false, 00:11:34.938 "compare": false, 00:11:34.938 "compare_and_write": false, 00:11:34.938 "abort": true, 00:11:34.938 "seek_hole": false, 00:11:34.938 "seek_data": false, 00:11:34.938 "copy": true, 00:11:34.938 "nvme_iov_md": false 00:11:34.938 }, 00:11:34.938 "memory_domains": [ 00:11:34.938 { 00:11:34.938 "dma_device_id": "system", 00:11:34.938 "dma_device_type": 1 00:11:34.938 }, 00:11:34.938 { 00:11:34.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.938 "dma_device_type": 2 00:11:34.938 } 00:11:34.938 ], 00:11:34.938 "driver_specific": {} 00:11:34.938 }' 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:11:34.938 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.198 "name": "BaseBdev4", 00:11:35.198 "aliases": [ 00:11:35.198 "acbd7fea-4a2e-11ef-9c8e-7947904e2597" 00:11:35.198 ], 00:11:35.198 "product_name": "Malloc disk", 00:11:35.198 "block_size": 512, 00:11:35.198 "num_blocks": 65536, 00:11:35.198 "uuid": "acbd7fea-4a2e-11ef-9c8e-7947904e2597", 00:11:35.198 "assigned_rate_limits": { 00:11:35.198 "rw_ios_per_sec": 0, 00:11:35.198 "rw_mbytes_per_sec": 0, 00:11:35.198 "r_mbytes_per_sec": 0, 00:11:35.198 "w_mbytes_per_sec": 0 00:11:35.198 }, 00:11:35.198 "claimed": true, 00:11:35.198 "claim_type": "exclusive_write", 00:11:35.198 "zoned": false, 00:11:35.198 "supported_io_types": { 00:11:35.198 "read": true, 00:11:35.198 "write": true, 00:11:35.198 "unmap": true, 00:11:35.198 "flush": true, 00:11:35.198 "reset": true, 00:11:35.198 "nvme_admin": false, 00:11:35.198 "nvme_io": false, 00:11:35.198 "nvme_io_md": false, 00:11:35.198 "write_zeroes": true, 00:11:35.198 "zcopy": true, 00:11:35.198 "get_zone_info": false, 00:11:35.198 "zone_management": false, 00:11:35.198 "zone_append": false, 00:11:35.198 "compare": false, 00:11:35.198 "compare_and_write": false, 00:11:35.198 "abort": true, 00:11:35.198 "seek_hole": false, 00:11:35.198 "seek_data": false, 00:11:35.198 "copy": true, 00:11:35.198 "nvme_iov_md": false 00:11:35.198 }, 00:11:35.198 "memory_domains": [ 00:11:35.198 { 00:11:35.198 "dma_device_id": "system", 00:11:35.198 "dma_device_type": 1 00:11:35.198 }, 00:11:35.198 { 00:11:35.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.198 "dma_device_type": 2 00:11:35.198 } 00:11:35.198 ], 00:11:35.198 "driver_specific": {} 00:11:35.198 }' 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.198 02:36:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.198 02:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:35.458 [2024-07-25 02:36:22.154719] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.458 [2024-07-25 02:36:22.154731] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.458 [2024-07-25 02:36:22.154740] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.458 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:35.458 "name": "Existed_Raid", 00:11:35.458 "uuid": "ab346b0b-4a2e-11ef-9c8e-7947904e2597", 00:11:35.458 "strip_size_kb": 64, 
00:11:35.458 "state": "offline", 00:11:35.458 "raid_level": "raid0", 00:11:35.458 "superblock": true, 00:11:35.458 "num_base_bdevs": 4, 00:11:35.458 "num_base_bdevs_discovered": 3, 00:11:35.458 "num_base_bdevs_operational": 3, 00:11:35.458 "base_bdevs_list": [ 00:11:35.458 { 00:11:35.458 "name": null, 00:11:35.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.458 "is_configured": false, 00:11:35.458 "data_offset": 2048, 00:11:35.458 "data_size": 63488 00:11:35.458 }, 00:11:35.458 { 00:11:35.458 "name": "BaseBdev2", 00:11:35.458 "uuid": "ab96142a-4a2e-11ef-9c8e-7947904e2597", 00:11:35.458 "is_configured": true, 00:11:35.458 "data_offset": 2048, 00:11:35.458 "data_size": 63488 00:11:35.458 }, 00:11:35.458 { 00:11:35.458 "name": "BaseBdev3", 00:11:35.458 "uuid": "ac27f5a8-4a2e-11ef-9c8e-7947904e2597", 00:11:35.458 "is_configured": true, 00:11:35.458 "data_offset": 2048, 00:11:35.458 "data_size": 63488 00:11:35.458 }, 00:11:35.458 { 00:11:35.458 "name": "BaseBdev4", 00:11:35.458 "uuid": "acbd7fea-4a2e-11ef-9c8e-7947904e2597", 00:11:35.458 "is_configured": true, 00:11:35.458 "data_offset": 2048, 00:11:35.458 "data_size": 63488 00:11:35.459 } 00:11:35.459 ] 00:11:35.459 }' 00:11:35.459 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:35.459 02:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.718 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:35.718 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:35.718 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.718 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:35.978 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:35.978 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.978 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:36.239 [2024-07-25 02:36:22.955437] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.239 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:36.239 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:36.239 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.239 02:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:36.499 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:36.499 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.499 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:36.499 [2024-07-25 02:36:23.320106] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.499 02:36:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:36.499 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:36.499 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.499 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:36.758 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:36.758 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.758 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:11:37.018 [2024-07-25 02:36:23.684780] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.018 [2024-07-25 02:36:23.684797] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xec079e34a00 name Existed_Raid, state offline 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:37.018 02:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.278 BaseBdev2 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:37.278 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:37.538 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.538 [ 
00:11:37.538 { 00:11:37.538 "name": "BaseBdev2", 00:11:37.538 "aliases": [ 00:11:37.538 "af39514f-4a2e-11ef-9c8e-7947904e2597" 00:11:37.538 ], 00:11:37.538 "product_name": "Malloc disk", 00:11:37.538 "block_size": 512, 00:11:37.538 "num_blocks": 65536, 00:11:37.538 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:37.538 "assigned_rate_limits": { 00:11:37.538 "rw_ios_per_sec": 0, 00:11:37.538 "rw_mbytes_per_sec": 0, 00:11:37.538 "r_mbytes_per_sec": 0, 00:11:37.538 "w_mbytes_per_sec": 0 00:11:37.538 }, 00:11:37.538 "claimed": false, 00:11:37.538 "zoned": false, 00:11:37.538 "supported_io_types": { 00:11:37.538 "read": true, 00:11:37.538 "write": true, 00:11:37.538 "unmap": true, 00:11:37.538 "flush": true, 00:11:37.538 "reset": true, 00:11:37.538 "nvme_admin": false, 00:11:37.538 "nvme_io": false, 00:11:37.538 "nvme_io_md": false, 00:11:37.538 "write_zeroes": true, 00:11:37.538 "zcopy": true, 00:11:37.538 "get_zone_info": false, 00:11:37.538 "zone_management": false, 00:11:37.538 "zone_append": false, 00:11:37.538 "compare": false, 00:11:37.538 "compare_and_write": false, 00:11:37.538 "abort": true, 00:11:37.538 "seek_hole": false, 00:11:37.538 "seek_data": false, 00:11:37.538 "copy": true, 00:11:37.538 "nvme_iov_md": false 00:11:37.538 }, 00:11:37.538 "memory_domains": [ 00:11:37.538 { 00:11:37.538 "dma_device_id": "system", 00:11:37.538 "dma_device_type": 1 00:11:37.538 }, 00:11:37.538 { 00:11:37.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.538 "dma_device_type": 2 00:11:37.538 } 00:11:37.538 ], 00:11:37.538 "driver_specific": {} 00:11:37.538 } 00:11:37.538 ] 00:11:37.538 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:37.538 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:37.538 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:37.538 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.798 BaseBdev3 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:37.798 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.057 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.057 [ 00:11:38.057 { 00:11:38.057 "name": "BaseBdev3", 00:11:38.057 "aliases": [ 00:11:38.057 "af8c540e-4a2e-11ef-9c8e-7947904e2597" 00:11:38.057 ], 00:11:38.057 "product_name": "Malloc disk", 00:11:38.057 "block_size": 512, 00:11:38.057 "num_blocks": 65536, 00:11:38.057 "uuid": 
"af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:38.057 "assigned_rate_limits": { 00:11:38.057 "rw_ios_per_sec": 0, 00:11:38.057 "rw_mbytes_per_sec": 0, 00:11:38.057 "r_mbytes_per_sec": 0, 00:11:38.057 "w_mbytes_per_sec": 0 00:11:38.057 }, 00:11:38.057 "claimed": false, 00:11:38.057 "zoned": false, 00:11:38.057 "supported_io_types": { 00:11:38.058 "read": true, 00:11:38.058 "write": true, 00:11:38.058 "unmap": true, 00:11:38.058 "flush": true, 00:11:38.058 "reset": true, 00:11:38.058 "nvme_admin": false, 00:11:38.058 "nvme_io": false, 00:11:38.058 "nvme_io_md": false, 00:11:38.058 "write_zeroes": true, 00:11:38.058 "zcopy": true, 00:11:38.058 "get_zone_info": false, 00:11:38.058 "zone_management": false, 00:11:38.058 "zone_append": false, 00:11:38.058 "compare": false, 00:11:38.058 "compare_and_write": false, 00:11:38.058 "abort": true, 00:11:38.058 "seek_hole": false, 00:11:38.058 "seek_data": false, 00:11:38.058 "copy": true, 00:11:38.058 "nvme_iov_md": false 00:11:38.058 }, 00:11:38.058 "memory_domains": [ 00:11:38.058 { 00:11:38.058 "dma_device_id": "system", 00:11:38.058 "dma_device_type": 1 00:11:38.058 }, 00:11:38.058 { 00:11:38.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.058 "dma_device_type": 2 00:11:38.058 } 00:11:38.058 ], 00:11:38.058 "driver_specific": {} 00:11:38.058 } 00:11:38.058 ] 00:11:38.318 02:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:38.318 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:38.318 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:38.318 02:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:38.318 BaseBdev4 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:38.318 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.577 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:38.838 [ 00:11:38.838 { 00:11:38.838 "name": "BaseBdev4", 00:11:38.838 "aliases": [ 00:11:38.838 "afdf56d0-4a2e-11ef-9c8e-7947904e2597" 00:11:38.838 ], 00:11:38.838 "product_name": "Malloc disk", 00:11:38.838 "block_size": 512, 00:11:38.838 "num_blocks": 65536, 00:11:38.838 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:38.838 "assigned_rate_limits": { 00:11:38.838 "rw_ios_per_sec": 0, 00:11:38.838 "rw_mbytes_per_sec": 0, 00:11:38.838 "r_mbytes_per_sec": 0, 00:11:38.838 "w_mbytes_per_sec": 0 00:11:38.838 }, 00:11:38.838 "claimed": false, 00:11:38.838 "zoned": false, 00:11:38.838 
"supported_io_types": { 00:11:38.838 "read": true, 00:11:38.838 "write": true, 00:11:38.838 "unmap": true, 00:11:38.838 "flush": true, 00:11:38.838 "reset": true, 00:11:38.838 "nvme_admin": false, 00:11:38.838 "nvme_io": false, 00:11:38.838 "nvme_io_md": false, 00:11:38.838 "write_zeroes": true, 00:11:38.838 "zcopy": true, 00:11:38.838 "get_zone_info": false, 00:11:38.838 "zone_management": false, 00:11:38.838 "zone_append": false, 00:11:38.838 "compare": false, 00:11:38.838 "compare_and_write": false, 00:11:38.838 "abort": true, 00:11:38.838 "seek_hole": false, 00:11:38.838 "seek_data": false, 00:11:38.838 "copy": true, 00:11:38.838 "nvme_iov_md": false 00:11:38.838 }, 00:11:38.838 "memory_domains": [ 00:11:38.838 { 00:11:38.838 "dma_device_id": "system", 00:11:38.838 "dma_device_type": 1 00:11:38.838 }, 00:11:38.838 { 00:11:38.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.838 "dma_device_type": 2 00:11:38.838 } 00:11:38.838 ], 00:11:38.838 "driver_specific": {} 00:11:38.838 } 00:11:38.838 ] 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:38.838 [2024-07-25 02:36:25.681557] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.838 [2024-07-25 02:36:25.681596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.838 [2024-07-25 02:36:25.681601] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.838 [2024-07-25 02:36:25.682022] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.838 [2024-07-25 02:36:25.682040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.838 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.098 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.098 "name": "Existed_Raid", 00:11:39.098 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:39.098 "strip_size_kb": 64, 00:11:39.098 "state": "configuring", 00:11:39.098 "raid_level": "raid0", 00:11:39.098 "superblock": true, 00:11:39.098 "num_base_bdevs": 4, 00:11:39.098 "num_base_bdevs_discovered": 3, 00:11:39.098 "num_base_bdevs_operational": 4, 00:11:39.098 "base_bdevs_list": [ 00:11:39.098 { 00:11:39.098 "name": "BaseBdev1", 00:11:39.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.098 "is_configured": false, 00:11:39.098 "data_offset": 0, 00:11:39.098 "data_size": 0 00:11:39.098 }, 00:11:39.098 { 00:11:39.098 "name": "BaseBdev2", 00:11:39.098 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:39.098 "is_configured": true, 00:11:39.098 "data_offset": 2048, 00:11:39.098 "data_size": 63488 00:11:39.098 }, 00:11:39.098 { 00:11:39.098 "name": "BaseBdev3", 00:11:39.098 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:39.098 "is_configured": true, 00:11:39.098 "data_offset": 2048, 00:11:39.098 "data_size": 63488 00:11:39.098 }, 00:11:39.098 { 00:11:39.098 "name": "BaseBdev4", 00:11:39.098 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:39.098 "is_configured": true, 00:11:39.098 "data_offset": 2048, 00:11:39.098 "data_size": 63488 00:11:39.098 } 00:11:39.098 ] 00:11:39.098 }' 00:11:39.098 02:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.098 02:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.358 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:39.618 [2024-07-25 02:36:26.325574] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.618 02:36:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.618 "name": "Existed_Raid", 00:11:39.618 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:39.618 "strip_size_kb": 64, 00:11:39.618 "state": "configuring", 00:11:39.618 "raid_level": "raid0", 00:11:39.618 "superblock": true, 00:11:39.618 "num_base_bdevs": 4, 00:11:39.618 "num_base_bdevs_discovered": 2, 00:11:39.618 "num_base_bdevs_operational": 4, 00:11:39.618 "base_bdevs_list": [ 00:11:39.618 { 00:11:39.618 "name": "BaseBdev1", 00:11:39.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.618 "is_configured": false, 00:11:39.618 "data_offset": 0, 00:11:39.618 "data_size": 0 00:11:39.618 }, 00:11:39.618 { 00:11:39.618 "name": null, 00:11:39.618 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:39.618 "is_configured": false, 00:11:39.618 "data_offset": 2048, 00:11:39.618 "data_size": 63488 00:11:39.618 }, 00:11:39.618 { 00:11:39.618 "name": "BaseBdev3", 00:11:39.618 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:39.618 "is_configured": true, 00:11:39.618 "data_offset": 2048, 00:11:39.618 "data_size": 63488 00:11:39.618 }, 00:11:39.618 { 00:11:39.618 "name": "BaseBdev4", 00:11:39.618 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:39.618 "is_configured": true, 00:11:39.618 "data_offset": 2048, 00:11:39.618 "data_size": 63488 00:11:39.618 } 00:11:39.618 ] 00:11:39.618 }' 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.618 02:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.188 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.188 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.188 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:40.188 02:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.448 [2024-07-25 02:36:27.141696] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.448 BaseBdev1 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:40.448 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.708 [ 00:11:40.708 { 00:11:40.708 "name": "BaseBdev1", 00:11:40.708 "aliases": [ 00:11:40.708 "b1112389-4a2e-11ef-9c8e-7947904e2597" 00:11:40.708 ], 00:11:40.708 "product_name": "Malloc disk", 00:11:40.708 "block_size": 512, 00:11:40.708 "num_blocks": 65536, 00:11:40.708 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:40.708 "assigned_rate_limits": { 00:11:40.708 "rw_ios_per_sec": 0, 00:11:40.708 "rw_mbytes_per_sec": 0, 00:11:40.708 "r_mbytes_per_sec": 0, 00:11:40.708 "w_mbytes_per_sec": 0 00:11:40.708 }, 00:11:40.708 "claimed": true, 00:11:40.708 "claim_type": "exclusive_write", 00:11:40.708 "zoned": false, 00:11:40.708 "supported_io_types": { 00:11:40.708 "read": true, 00:11:40.708 "write": true, 00:11:40.708 "unmap": true, 00:11:40.708 "flush": true, 00:11:40.708 "reset": true, 00:11:40.708 "nvme_admin": false, 00:11:40.708 "nvme_io": false, 00:11:40.708 "nvme_io_md": false, 00:11:40.708 "write_zeroes": true, 00:11:40.708 "zcopy": true, 00:11:40.708 "get_zone_info": false, 00:11:40.708 "zone_management": false, 00:11:40.708 "zone_append": false, 00:11:40.708 "compare": false, 00:11:40.708 "compare_and_write": false, 00:11:40.708 "abort": true, 00:11:40.708 "seek_hole": false, 00:11:40.708 "seek_data": false, 00:11:40.708 "copy": true, 00:11:40.708 "nvme_iov_md": false 00:11:40.708 }, 00:11:40.708 "memory_domains": [ 00:11:40.708 { 00:11:40.708 "dma_device_id": "system", 00:11:40.708 "dma_device_type": 1 00:11:40.708 }, 00:11:40.708 { 00:11:40.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.709 "dma_device_type": 2 00:11:40.709 } 00:11:40.709 ], 00:11:40.709 "driver_specific": {} 00:11:40.709 } 00:11:40.709 ] 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.709 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.968 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.968 "name": "Existed_Raid", 00:11:40.968 "uuid": 
"b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:40.968 "strip_size_kb": 64, 00:11:40.968 "state": "configuring", 00:11:40.968 "raid_level": "raid0", 00:11:40.968 "superblock": true, 00:11:40.968 "num_base_bdevs": 4, 00:11:40.968 "num_base_bdevs_discovered": 3, 00:11:40.968 "num_base_bdevs_operational": 4, 00:11:40.968 "base_bdevs_list": [ 00:11:40.968 { 00:11:40.968 "name": "BaseBdev1", 00:11:40.968 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:40.968 "is_configured": true, 00:11:40.968 "data_offset": 2048, 00:11:40.968 "data_size": 63488 00:11:40.968 }, 00:11:40.968 { 00:11:40.968 "name": null, 00:11:40.968 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:40.968 "is_configured": false, 00:11:40.968 "data_offset": 2048, 00:11:40.968 "data_size": 63488 00:11:40.968 }, 00:11:40.968 { 00:11:40.968 "name": "BaseBdev3", 00:11:40.968 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:40.968 "is_configured": true, 00:11:40.968 "data_offset": 2048, 00:11:40.968 "data_size": 63488 00:11:40.968 }, 00:11:40.968 { 00:11:40.968 "name": "BaseBdev4", 00:11:40.968 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:40.968 "is_configured": true, 00:11:40.968 "data_offset": 2048, 00:11:40.968 "data_size": 63488 00:11:40.968 } 00:11:40.968 ] 00:11:40.968 }' 00:11:40.968 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.968 02:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.227 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.227 02:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.227 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:41.227 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:41.487 [2024-07-25 02:36:28.289682] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.487 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.747 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.747 "name": "Existed_Raid", 00:11:41.747 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:41.747 "strip_size_kb": 64, 00:11:41.747 "state": "configuring", 00:11:41.747 "raid_level": "raid0", 00:11:41.747 "superblock": true, 00:11:41.747 "num_base_bdevs": 4, 00:11:41.747 "num_base_bdevs_discovered": 2, 00:11:41.747 "num_base_bdevs_operational": 4, 00:11:41.747 "base_bdevs_list": [ 00:11:41.747 { 00:11:41.747 "name": "BaseBdev1", 00:11:41.747 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:41.747 "is_configured": true, 00:11:41.747 "data_offset": 2048, 00:11:41.747 "data_size": 63488 00:11:41.747 }, 00:11:41.747 { 00:11:41.747 "name": null, 00:11:41.747 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:41.747 "is_configured": false, 00:11:41.747 "data_offset": 2048, 00:11:41.747 "data_size": 63488 00:11:41.747 }, 00:11:41.747 { 00:11:41.747 "name": null, 00:11:41.747 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:41.747 "is_configured": false, 00:11:41.747 "data_offset": 2048, 00:11:41.747 "data_size": 63488 00:11:41.747 }, 00:11:41.747 { 00:11:41.747 "name": "BaseBdev4", 00:11:41.747 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:41.747 "is_configured": true, 00:11:41.747 "data_offset": 2048, 00:11:41.747 "data_size": 63488 00:11:41.747 } 00:11:41.747 ] 00:11:41.747 }' 00:11:41.747 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.747 02:36:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.007 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.007 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.279 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:42.279 02:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.279 [2024-07-25 02:36:29.097727] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.279 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.567 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.567 "name": "Existed_Raid", 00:11:42.567 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:42.567 "strip_size_kb": 64, 00:11:42.567 "state": "configuring", 00:11:42.567 "raid_level": "raid0", 00:11:42.567 "superblock": true, 00:11:42.567 "num_base_bdevs": 4, 00:11:42.567 "num_base_bdevs_discovered": 3, 00:11:42.567 "num_base_bdevs_operational": 4, 00:11:42.567 "base_bdevs_list": [ 00:11:42.567 { 00:11:42.567 "name": "BaseBdev1", 00:11:42.567 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:42.567 "is_configured": true, 00:11:42.567 "data_offset": 2048, 00:11:42.567 "data_size": 63488 00:11:42.567 }, 00:11:42.567 { 00:11:42.567 "name": null, 00:11:42.567 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:42.567 "is_configured": false, 00:11:42.567 "data_offset": 2048, 00:11:42.567 "data_size": 63488 00:11:42.567 }, 00:11:42.567 { 00:11:42.567 "name": "BaseBdev3", 00:11:42.567 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:42.567 "is_configured": true, 00:11:42.567 "data_offset": 2048, 00:11:42.567 "data_size": 63488 00:11:42.567 }, 00:11:42.567 { 00:11:42.567 "name": "BaseBdev4", 00:11:42.567 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:42.567 "is_configured": true, 00:11:42.567 "data_offset": 2048, 00:11:42.567 "data_size": 63488 00:11:42.567 } 00:11:42.567 ] 00:11:42.567 }' 00:11:42.567 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.567 02:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.827 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.827 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:43.087 [2024-07-25 02:36:29.925799] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.087 02:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.347 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.347 "name": "Existed_Raid", 00:11:43.347 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:43.347 "strip_size_kb": 64, 00:11:43.347 "state": "configuring", 00:11:43.347 "raid_level": "raid0", 00:11:43.347 "superblock": true, 00:11:43.347 "num_base_bdevs": 4, 00:11:43.347 "num_base_bdevs_discovered": 2, 00:11:43.347 "num_base_bdevs_operational": 4, 00:11:43.347 "base_bdevs_list": [ 00:11:43.347 { 00:11:43.347 "name": null, 00:11:43.347 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:43.347 "is_configured": false, 00:11:43.347 "data_offset": 2048, 00:11:43.347 "data_size": 63488 00:11:43.347 }, 00:11:43.347 { 00:11:43.347 "name": null, 00:11:43.347 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:43.347 "is_configured": false, 00:11:43.347 "data_offset": 2048, 00:11:43.347 "data_size": 63488 00:11:43.347 }, 00:11:43.347 { 00:11:43.347 "name": "BaseBdev3", 00:11:43.347 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:43.347 "is_configured": true, 00:11:43.347 "data_offset": 2048, 00:11:43.347 "data_size": 63488 00:11:43.347 }, 00:11:43.347 { 00:11:43.347 "name": "BaseBdev4", 00:11:43.347 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:43.347 "is_configured": true, 00:11:43.347 "data_offset": 2048, 00:11:43.347 "data_size": 63488 00:11:43.347 } 00:11:43.347 ] 00:11:43.347 }' 00:11:43.347 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.347 02:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.607 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.607 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.867 [2024-07-25 02:36:30.734465] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.867 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.126 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.126 "name": "Existed_Raid", 00:11:44.126 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:44.126 "strip_size_kb": 64, 00:11:44.126 "state": "configuring", 00:11:44.126 "raid_level": "raid0", 00:11:44.126 "superblock": true, 00:11:44.126 "num_base_bdevs": 4, 00:11:44.126 "num_base_bdevs_discovered": 3, 00:11:44.126 "num_base_bdevs_operational": 4, 00:11:44.126 "base_bdevs_list": [ 00:11:44.126 { 00:11:44.126 "name": null, 00:11:44.126 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:44.126 "is_configured": false, 00:11:44.126 "data_offset": 2048, 00:11:44.126 "data_size": 63488 00:11:44.127 }, 00:11:44.127 { 00:11:44.127 "name": "BaseBdev2", 00:11:44.127 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:44.127 "is_configured": true, 00:11:44.127 "data_offset": 2048, 00:11:44.127 "data_size": 63488 00:11:44.127 }, 00:11:44.127 { 00:11:44.127 "name": "BaseBdev3", 00:11:44.127 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:44.127 "is_configured": true, 00:11:44.127 "data_offset": 2048, 00:11:44.127 "data_size": 63488 00:11:44.127 }, 00:11:44.127 { 00:11:44.127 "name": "BaseBdev4", 00:11:44.127 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:44.127 "is_configured": true, 00:11:44.127 "data_offset": 2048, 00:11:44.127 "data_size": 63488 00:11:44.127 } 00:11:44.127 ] 00:11:44.127 }' 00:11:44.127 02:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.127 02:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.386 02:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.386 02:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.645 02:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:44.645 02:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.645 02:36:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.905 02:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b1112389-4a2e-11ef-9c8e-7947904e2597 00:11:44.905 [2024-07-25 02:36:31.738604] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.906 [2024-07-25 02:36:31.738637] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0xec079e34f00 00:11:44.906 [2024-07-25 02:36:31.738640] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.906 [2024-07-25 02:36:31.738655] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xec079e97e20 00:11:44.906 [2024-07-25 02:36:31.738685] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xec079e34f00 00:11:44.906 [2024-07-25 02:36:31.738687] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xec079e34f00 00:11:44.906 [2024-07-25 02:36:31.738701] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.906 NewBaseBdev 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:44.906 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:45.165 02:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:45.425 [ 00:11:45.425 { 00:11:45.425 "name": "NewBaseBdev", 00:11:45.425 "aliases": [ 00:11:45.425 "b1112389-4a2e-11ef-9c8e-7947904e2597" 00:11:45.425 ], 00:11:45.425 "product_name": "Malloc disk", 00:11:45.425 "block_size": 512, 00:11:45.425 "num_blocks": 65536, 00:11:45.425 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:45.425 "assigned_rate_limits": { 00:11:45.425 "rw_ios_per_sec": 0, 00:11:45.425 "rw_mbytes_per_sec": 0, 00:11:45.425 "r_mbytes_per_sec": 0, 00:11:45.425 "w_mbytes_per_sec": 0 00:11:45.425 }, 00:11:45.425 "claimed": true, 00:11:45.425 "claim_type": "exclusive_write", 00:11:45.425 "zoned": false, 00:11:45.425 "supported_io_types": { 00:11:45.425 "read": true, 00:11:45.425 "write": true, 00:11:45.425 "unmap": true, 00:11:45.425 "flush": true, 00:11:45.425 "reset": true, 00:11:45.425 "nvme_admin": false, 00:11:45.425 "nvme_io": false, 00:11:45.425 "nvme_io_md": false, 00:11:45.425 "write_zeroes": true, 00:11:45.425 "zcopy": true, 00:11:45.425 "get_zone_info": false, 00:11:45.425 "zone_management": false, 00:11:45.425 "zone_append": false, 00:11:45.425 "compare": false, 00:11:45.425 "compare_and_write": false, 00:11:45.425 "abort": true, 
00:11:45.425 "seek_hole": false, 00:11:45.425 "seek_data": false, 00:11:45.425 "copy": true, 00:11:45.425 "nvme_iov_md": false 00:11:45.425 }, 00:11:45.425 "memory_domains": [ 00:11:45.425 { 00:11:45.425 "dma_device_id": "system", 00:11:45.425 "dma_device_type": 1 00:11:45.425 }, 00:11:45.425 { 00:11:45.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.425 "dma_device_type": 2 00:11:45.425 } 00:11:45.425 ], 00:11:45.425 "driver_specific": {} 00:11:45.425 } 00:11:45.425 ] 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.425 "name": "Existed_Raid", 00:11:45.425 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:45.425 "strip_size_kb": 64, 00:11:45.425 "state": "online", 00:11:45.425 "raid_level": "raid0", 00:11:45.425 "superblock": true, 00:11:45.425 "num_base_bdevs": 4, 00:11:45.425 "num_base_bdevs_discovered": 4, 00:11:45.425 "num_base_bdevs_operational": 4, 00:11:45.425 "base_bdevs_list": [ 00:11:45.425 { 00:11:45.425 "name": "NewBaseBdev", 00:11:45.425 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:45.425 "is_configured": true, 00:11:45.425 "data_offset": 2048, 00:11:45.425 "data_size": 63488 00:11:45.425 }, 00:11:45.425 { 00:11:45.425 "name": "BaseBdev2", 00:11:45.425 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:45.425 "is_configured": true, 00:11:45.425 "data_offset": 2048, 00:11:45.425 "data_size": 63488 00:11:45.425 }, 00:11:45.425 { 00:11:45.425 "name": "BaseBdev3", 00:11:45.425 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:45.425 "is_configured": true, 00:11:45.425 "data_offset": 2048, 00:11:45.425 "data_size": 63488 00:11:45.425 }, 00:11:45.425 { 00:11:45.425 "name": "BaseBdev4", 00:11:45.425 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:45.425 "is_configured": true, 00:11:45.425 "data_offset": 2048, 00:11:45.425 "data_size": 63488 00:11:45.425 } 00:11:45.425 ] 00:11:45.425 }' 00:11:45.425 02:36:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.425 02:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:45.685 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:45.945 [2024-07-25 02:36:32.730593] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.945 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:45.945 "name": "Existed_Raid", 00:11:45.945 "aliases": [ 00:11:45.945 "b03259ba-4a2e-11ef-9c8e-7947904e2597" 00:11:45.945 ], 00:11:45.945 "product_name": "Raid Volume", 00:11:45.945 "block_size": 512, 00:11:45.945 "num_blocks": 253952, 00:11:45.945 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:45.945 "assigned_rate_limits": { 00:11:45.945 "rw_ios_per_sec": 0, 00:11:45.945 "rw_mbytes_per_sec": 0, 00:11:45.945 "r_mbytes_per_sec": 0, 00:11:45.945 "w_mbytes_per_sec": 0 00:11:45.945 }, 00:11:45.945 "claimed": false, 00:11:45.945 "zoned": false, 00:11:45.945 "supported_io_types": { 00:11:45.945 "read": true, 00:11:45.945 "write": true, 00:11:45.945 "unmap": true, 00:11:45.945 "flush": true, 00:11:45.945 "reset": true, 00:11:45.945 "nvme_admin": false, 00:11:45.945 "nvme_io": false, 00:11:45.945 "nvme_io_md": false, 00:11:45.945 "write_zeroes": true, 00:11:45.945 "zcopy": false, 00:11:45.945 "get_zone_info": false, 00:11:45.945 "zone_management": false, 00:11:45.945 "zone_append": false, 00:11:45.945 "compare": false, 00:11:45.945 "compare_and_write": false, 00:11:45.945 "abort": false, 00:11:45.945 "seek_hole": false, 00:11:45.945 "seek_data": false, 00:11:45.945 "copy": false, 00:11:45.945 "nvme_iov_md": false 00:11:45.945 }, 00:11:45.945 "memory_domains": [ 00:11:45.945 { 00:11:45.945 "dma_device_id": "system", 00:11:45.945 "dma_device_type": 1 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.945 "dma_device_type": 2 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "system", 00:11:45.945 "dma_device_type": 1 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.945 "dma_device_type": 2 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "system", 00:11:45.945 "dma_device_type": 1 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.945 "dma_device_type": 2 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "system", 00:11:45.945 "dma_device_type": 1 00:11:45.945 }, 00:11:45.945 { 00:11:45.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.945 
"dma_device_type": 2 00:11:45.945 } 00:11:45.945 ], 00:11:45.945 "driver_specific": { 00:11:45.945 "raid": { 00:11:45.945 "uuid": "b03259ba-4a2e-11ef-9c8e-7947904e2597", 00:11:45.945 "strip_size_kb": 64, 00:11:45.945 "state": "online", 00:11:45.945 "raid_level": "raid0", 00:11:45.945 "superblock": true, 00:11:45.945 "num_base_bdevs": 4, 00:11:45.945 "num_base_bdevs_discovered": 4, 00:11:45.945 "num_base_bdevs_operational": 4, 00:11:45.945 "base_bdevs_list": [ 00:11:45.945 { 00:11:45.945 "name": "NewBaseBdev", 00:11:45.945 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:45.945 "is_configured": true, 00:11:45.946 "data_offset": 2048, 00:11:45.946 "data_size": 63488 00:11:45.946 }, 00:11:45.946 { 00:11:45.946 "name": "BaseBdev2", 00:11:45.946 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:45.946 "is_configured": true, 00:11:45.946 "data_offset": 2048, 00:11:45.946 "data_size": 63488 00:11:45.946 }, 00:11:45.946 { 00:11:45.946 "name": "BaseBdev3", 00:11:45.946 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:45.946 "is_configured": true, 00:11:45.946 "data_offset": 2048, 00:11:45.946 "data_size": 63488 00:11:45.946 }, 00:11:45.946 { 00:11:45.946 "name": "BaseBdev4", 00:11:45.946 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:45.946 "is_configured": true, 00:11:45.946 "data_offset": 2048, 00:11:45.946 "data_size": 63488 00:11:45.946 } 00:11:45.946 ] 00:11:45.946 } 00:11:45.946 } 00:11:45.946 }' 00:11:45.946 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.946 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:45.946 BaseBdev2 00:11:45.946 BaseBdev3 00:11:45.946 BaseBdev4' 00:11:45.946 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.946 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:45.946 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.205 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.205 "name": "NewBaseBdev", 00:11:46.206 "aliases": [ 00:11:46.206 "b1112389-4a2e-11ef-9c8e-7947904e2597" 00:11:46.206 ], 00:11:46.206 "product_name": "Malloc disk", 00:11:46.206 "block_size": 512, 00:11:46.206 "num_blocks": 65536, 00:11:46.206 "uuid": "b1112389-4a2e-11ef-9c8e-7947904e2597", 00:11:46.206 "assigned_rate_limits": { 00:11:46.206 "rw_ios_per_sec": 0, 00:11:46.206 "rw_mbytes_per_sec": 0, 00:11:46.206 "r_mbytes_per_sec": 0, 00:11:46.206 "w_mbytes_per_sec": 0 00:11:46.206 }, 00:11:46.206 "claimed": true, 00:11:46.206 "claim_type": "exclusive_write", 00:11:46.206 "zoned": false, 00:11:46.206 "supported_io_types": { 00:11:46.206 "read": true, 00:11:46.206 "write": true, 00:11:46.206 "unmap": true, 00:11:46.206 "flush": true, 00:11:46.206 "reset": true, 00:11:46.206 "nvme_admin": false, 00:11:46.206 "nvme_io": false, 00:11:46.206 "nvme_io_md": false, 00:11:46.206 "write_zeroes": true, 00:11:46.206 "zcopy": true, 00:11:46.206 "get_zone_info": false, 00:11:46.206 "zone_management": false, 00:11:46.206 "zone_append": false, 00:11:46.206 "compare": false, 00:11:46.206 "compare_and_write": false, 00:11:46.206 "abort": true, 00:11:46.206 "seek_hole": false, 00:11:46.206 "seek_data": false, 00:11:46.206 
"copy": true, 00:11:46.206 "nvme_iov_md": false 00:11:46.206 }, 00:11:46.206 "memory_domains": [ 00:11:46.206 { 00:11:46.206 "dma_device_id": "system", 00:11:46.206 "dma_device_type": 1 00:11:46.206 }, 00:11:46.206 { 00:11:46.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.206 "dma_device_type": 2 00:11:46.206 } 00:11:46.206 ], 00:11:46.206 "driver_specific": {} 00:11:46.206 }' 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.206 02:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:46.206 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.466 "name": "BaseBdev2", 00:11:46.466 "aliases": [ 00:11:46.466 "af39514f-4a2e-11ef-9c8e-7947904e2597" 00:11:46.466 ], 00:11:46.466 "product_name": "Malloc disk", 00:11:46.466 "block_size": 512, 00:11:46.466 "num_blocks": 65536, 00:11:46.466 "uuid": "af39514f-4a2e-11ef-9c8e-7947904e2597", 00:11:46.466 "assigned_rate_limits": { 00:11:46.466 "rw_ios_per_sec": 0, 00:11:46.466 "rw_mbytes_per_sec": 0, 00:11:46.466 "r_mbytes_per_sec": 0, 00:11:46.466 "w_mbytes_per_sec": 0 00:11:46.466 }, 00:11:46.466 "claimed": true, 00:11:46.466 "claim_type": "exclusive_write", 00:11:46.466 "zoned": false, 00:11:46.466 "supported_io_types": { 00:11:46.466 "read": true, 00:11:46.466 "write": true, 00:11:46.466 "unmap": true, 00:11:46.466 "flush": true, 00:11:46.466 "reset": true, 00:11:46.466 "nvme_admin": false, 00:11:46.466 "nvme_io": false, 00:11:46.466 "nvme_io_md": false, 00:11:46.466 "write_zeroes": true, 00:11:46.466 "zcopy": true, 00:11:46.466 "get_zone_info": false, 00:11:46.466 "zone_management": false, 00:11:46.466 "zone_append": false, 00:11:46.466 "compare": false, 00:11:46.466 "compare_and_write": false, 00:11:46.466 "abort": true, 00:11:46.466 "seek_hole": false, 00:11:46.466 "seek_data": false, 00:11:46.466 "copy": true, 00:11:46.466 "nvme_iov_md": false 00:11:46.466 }, 00:11:46.466 "memory_domains": [ 00:11:46.466 { 00:11:46.466 "dma_device_id": "system", 
00:11:46.466 "dma_device_type": 1 00:11:46.466 }, 00:11:46.466 { 00:11:46.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.466 "dma_device_type": 2 00:11:46.466 } 00:11:46.466 ], 00:11:46.466 "driver_specific": {} 00:11:46.466 }' 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:46.466 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.725 "name": "BaseBdev3", 00:11:46.725 "aliases": [ 00:11:46.725 "af8c540e-4a2e-11ef-9c8e-7947904e2597" 00:11:46.725 ], 00:11:46.725 "product_name": "Malloc disk", 00:11:46.725 "block_size": 512, 00:11:46.725 "num_blocks": 65536, 00:11:46.725 "uuid": "af8c540e-4a2e-11ef-9c8e-7947904e2597", 00:11:46.725 "assigned_rate_limits": { 00:11:46.725 "rw_ios_per_sec": 0, 00:11:46.725 "rw_mbytes_per_sec": 0, 00:11:46.725 "r_mbytes_per_sec": 0, 00:11:46.725 "w_mbytes_per_sec": 0 00:11:46.725 }, 00:11:46.725 "claimed": true, 00:11:46.725 "claim_type": "exclusive_write", 00:11:46.725 "zoned": false, 00:11:46.725 "supported_io_types": { 00:11:46.725 "read": true, 00:11:46.725 "write": true, 00:11:46.725 "unmap": true, 00:11:46.725 "flush": true, 00:11:46.725 "reset": true, 00:11:46.725 "nvme_admin": false, 00:11:46.725 "nvme_io": false, 00:11:46.725 "nvme_io_md": false, 00:11:46.725 "write_zeroes": true, 00:11:46.725 "zcopy": true, 00:11:46.725 "get_zone_info": false, 00:11:46.725 "zone_management": false, 00:11:46.725 "zone_append": false, 00:11:46.725 "compare": false, 00:11:46.725 "compare_and_write": false, 00:11:46.725 "abort": true, 00:11:46.725 "seek_hole": false, 00:11:46.725 "seek_data": false, 00:11:46.725 "copy": true, 00:11:46.725 "nvme_iov_md": false 00:11:46.725 }, 00:11:46.725 "memory_domains": [ 00:11:46.725 { 00:11:46.725 "dma_device_id": "system", 00:11:46.725 "dma_device_type": 1 00:11:46.725 }, 00:11:46.725 { 00:11:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.725 "dma_device_type": 
2 00:11:46.725 } 00:11:46.725 ], 00:11:46.725 "driver_specific": {} 00:11:46.725 }' 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:11:46.725 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.984 "name": "BaseBdev4", 00:11:46.984 "aliases": [ 00:11:46.984 "afdf56d0-4a2e-11ef-9c8e-7947904e2597" 00:11:46.984 ], 00:11:46.984 "product_name": "Malloc disk", 00:11:46.984 "block_size": 512, 00:11:46.984 "num_blocks": 65536, 00:11:46.984 "uuid": "afdf56d0-4a2e-11ef-9c8e-7947904e2597", 00:11:46.984 "assigned_rate_limits": { 00:11:46.984 "rw_ios_per_sec": 0, 00:11:46.984 "rw_mbytes_per_sec": 0, 00:11:46.984 "r_mbytes_per_sec": 0, 00:11:46.984 "w_mbytes_per_sec": 0 00:11:46.984 }, 00:11:46.984 "claimed": true, 00:11:46.984 "claim_type": "exclusive_write", 00:11:46.984 "zoned": false, 00:11:46.984 "supported_io_types": { 00:11:46.984 "read": true, 00:11:46.984 "write": true, 00:11:46.984 "unmap": true, 00:11:46.984 "flush": true, 00:11:46.984 "reset": true, 00:11:46.984 "nvme_admin": false, 00:11:46.984 "nvme_io": false, 00:11:46.984 "nvme_io_md": false, 00:11:46.984 "write_zeroes": true, 00:11:46.984 "zcopy": true, 00:11:46.984 "get_zone_info": false, 00:11:46.984 "zone_management": false, 00:11:46.984 "zone_append": false, 00:11:46.984 "compare": false, 00:11:46.984 "compare_and_write": false, 00:11:46.984 "abort": true, 00:11:46.984 "seek_hole": false, 00:11:46.984 "seek_data": false, 00:11:46.984 "copy": true, 00:11:46.984 "nvme_iov_md": false 00:11:46.984 }, 00:11:46.984 "memory_domains": [ 00:11:46.984 { 00:11:46.984 "dma_device_id": "system", 00:11:46.984 "dma_device_type": 1 00:11:46.984 }, 00:11:46.984 { 00:11:46.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.984 "dma_device_type": 2 00:11:46.984 } 00:11:46.984 ], 00:11:46.984 "driver_specific": {} 00:11:46.984 }' 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.984 02:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:47.244 [2024-07-25 02:36:34.018642] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.244 [2024-07-25 02:36:34.018655] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.244 [2024-07-25 02:36:34.018667] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.244 [2024-07-25 02:36:34.018677] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.244 [2024-07-25 02:36:34.018680] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xec079e34f00 name Existed_Raid, state offline 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 58856 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 58856 ']' 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 58856 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 58856 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:47.244 killing process with pid 58856 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58856' 00:11:47.244 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 58856 00:11:47.244 [2024-07-25 02:36:34.046177] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.244 02:36:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 58856 00:11:47.244 [2024-07-25 02:36:34.064591] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.504 02:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:47.504 00:11:47.504 real 0m20.115s 00:11:47.504 user 0m36.030s 00:11:47.504 sys 0m3.603s 00:11:47.504 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.504 02:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.504 ************************************ 00:11:47.504 END TEST raid_state_function_test_sb 00:11:47.504 ************************************ 00:11:47.504 02:36:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:47.504 02:36:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:47.504 02:36:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:47.504 02:36:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.504 02:36:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.504 ************************************ 00:11:47.504 START TEST raid_superblock_test 00:11:47.504 ************************************ 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=59646 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 59646 /var/tmp/spdk-raid.sock 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
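The raid_superblock_test run that begins here drives SPDK entirely through scripts/rpc.py against the /var/tmp/spdk-raid.sock socket opened by the bdev_svc app launched on the line above. A minimal standalone sketch of the same call sequence, assuming bdev_svc is already listening on that socket and using only the RPCs that appear in this trace, would be:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # create four 32 MB malloc bdevs (512-byte blocks) and wrap each in a passthru bdev
  for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # assemble a raid0 volume with a 64 KB strip size and an on-disk superblock (-s)
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # inspect the assembled volume, then tear it down
  $RPC bdev_raid_get_bdevs all
  $RPC bdev_raid_delete raid_bdev1

The passthru UUIDs, strip size, and raid name mirror the values printed in the JSON dumps below; the teardown order is illustrative rather than copied from the test script.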
00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 59646 ']' 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.504 02:36:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.504 [2024-07-25 02:36:34.306672] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:11:47.504 [2024-07-25 02:36:34.307014] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:48.073 EAL: TSC is not safe to use in SMP mode 00:11:48.073 EAL: TSC is not invariant 00:11:48.073 [2024-07-25 02:36:34.724888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.073 [2024-07-25 02:36:34.817504] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:48.073 [2024-07-25 02:36:34.819116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.074 [2024-07-25 02:36:34.819655] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.074 [2024-07-25 02:36:34.819666] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:48.333 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:48.593 malloc1 00:11:48.593 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.852 [2024-07-25 02:36:35.518578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.852 [2024-07-25 02:36:35.518614] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.852 [2024-07-25 02:36:35.518620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a34780 00:11:48.852 [2024-07-25 02:36:35.518626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.852 [2024-07-25 02:36:35.519275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.852 [2024-07-25 02:36:35.519301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.852 pt1 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:48.852 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:48.853 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:48.853 malloc2 00:11:48.853 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.112 [2024-07-25 02:36:35.858595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.112 [2024-07-25 02:36:35.858632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.112 [2024-07-25 02:36:35.858655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a34c80 00:11:49.112 [2024-07-25 02:36:35.858660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.112 [2024-07-25 02:36:35.859070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.112 [2024-07-25 02:36:35.859095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.112 pt2 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.112 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.113 02:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:49.372 malloc3 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.372 [2024-07-25 02:36:36.210616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.372 [2024-07-25 02:36:36.210671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.372 [2024-07-25 02:36:36.210678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a35180 00:11:49.372 [2024-07-25 02:36:36.210684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.372 [2024-07-25 02:36:36.211119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.372 [2024-07-25 02:36:36.211144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.372 pt3 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.372 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:11:49.632 malloc4 00:11:49.632 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.891 [2024-07-25 02:36:36.574636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.891 [2024-07-25 02:36:36.574688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.891 [2024-07-25 02:36:36.574695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a35680 00:11:49.891 [2024-07-25 02:36:36.574701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.891 [2024-07-25 02:36:36.575124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.891 [2024-07-25 02:36:36.575148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.891 pt4 00:11:49.891 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:49.891 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:49.891 02:36:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:11:49.891 [2024-07-25 02:36:36.754654] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:49.891 [2024-07-25 02:36:36.755039] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.892 [2024-07-25 02:36:36.755076] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.892 [2024-07-25 02:36:36.755085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.892 [2024-07-25 02:36:36.755126] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b51f7a35900 00:11:49.892 [2024-07-25 02:36:36.755130] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:49.892 [2024-07-25 02:36:36.755159] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b51f7a97e20 00:11:49.892 [2024-07-25 02:36:36.755208] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b51f7a35900 00:11:49.892 [2024-07-25 02:36:36.755211] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b51f7a35900 00:11:49.892 [2024-07-25 02:36:36.755230] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.892 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.152 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.152 "name": "raid_bdev1", 00:11:50.152 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:50.152 "strip_size_kb": 64, 00:11:50.152 "state": "online", 00:11:50.152 "raid_level": "raid0", 00:11:50.152 "superblock": true, 00:11:50.152 "num_base_bdevs": 4, 00:11:50.152 "num_base_bdevs_discovered": 4, 00:11:50.152 "num_base_bdevs_operational": 4, 00:11:50.152 "base_bdevs_list": [ 00:11:50.152 { 00:11:50.152 "name": "pt1", 00:11:50.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.152 "is_configured": true, 00:11:50.152 "data_offset": 2048, 00:11:50.152 "data_size": 
63488 00:11:50.152 }, 00:11:50.152 { 00:11:50.152 "name": "pt2", 00:11:50.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.152 "is_configured": true, 00:11:50.152 "data_offset": 2048, 00:11:50.152 "data_size": 63488 00:11:50.152 }, 00:11:50.152 { 00:11:50.152 "name": "pt3", 00:11:50.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.152 "is_configured": true, 00:11:50.152 "data_offset": 2048, 00:11:50.152 "data_size": 63488 00:11:50.152 }, 00:11:50.152 { 00:11:50.152 "name": "pt4", 00:11:50.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.152 "is_configured": true, 00:11:50.152 "data_offset": 2048, 00:11:50.152 "data_size": 63488 00:11:50.152 } 00:11:50.152 ] 00:11:50.152 }' 00:11:50.152 02:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.152 02:36:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:50.412 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:50.672 [2024-07-25 02:36:37.386720] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.672 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:50.672 "name": "raid_bdev1", 00:11:50.672 "aliases": [ 00:11:50.672 "b6cbf87b-4a2e-11ef-9c8e-7947904e2597" 00:11:50.672 ], 00:11:50.672 "product_name": "Raid Volume", 00:11:50.672 "block_size": 512, 00:11:50.672 "num_blocks": 253952, 00:11:50.672 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:50.672 "assigned_rate_limits": { 00:11:50.672 "rw_ios_per_sec": 0, 00:11:50.672 "rw_mbytes_per_sec": 0, 00:11:50.672 "r_mbytes_per_sec": 0, 00:11:50.672 "w_mbytes_per_sec": 0 00:11:50.672 }, 00:11:50.672 "claimed": false, 00:11:50.672 "zoned": false, 00:11:50.672 "supported_io_types": { 00:11:50.672 "read": true, 00:11:50.672 "write": true, 00:11:50.672 "unmap": true, 00:11:50.672 "flush": true, 00:11:50.672 "reset": true, 00:11:50.672 "nvme_admin": false, 00:11:50.672 "nvme_io": false, 00:11:50.672 "nvme_io_md": false, 00:11:50.672 "write_zeroes": true, 00:11:50.672 "zcopy": false, 00:11:50.672 "get_zone_info": false, 00:11:50.672 "zone_management": false, 00:11:50.672 "zone_append": false, 00:11:50.672 "compare": false, 00:11:50.672 "compare_and_write": false, 00:11:50.672 "abort": false, 00:11:50.672 "seek_hole": false, 00:11:50.672 "seek_data": false, 00:11:50.672 "copy": false, 00:11:50.672 "nvme_iov_md": false 00:11:50.672 }, 00:11:50.672 "memory_domains": [ 00:11:50.672 { 00:11:50.672 "dma_device_id": "system", 00:11:50.672 "dma_device_type": 1 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.672 "dma_device_type": 2 
00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "system", 00:11:50.672 "dma_device_type": 1 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.672 "dma_device_type": 2 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "system", 00:11:50.672 "dma_device_type": 1 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.672 "dma_device_type": 2 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "system", 00:11:50.672 "dma_device_type": 1 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.672 "dma_device_type": 2 00:11:50.672 } 00:11:50.672 ], 00:11:50.672 "driver_specific": { 00:11:50.672 "raid": { 00:11:50.672 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:50.672 "strip_size_kb": 64, 00:11:50.672 "state": "online", 00:11:50.672 "raid_level": "raid0", 00:11:50.672 "superblock": true, 00:11:50.672 "num_base_bdevs": 4, 00:11:50.672 "num_base_bdevs_discovered": 4, 00:11:50.672 "num_base_bdevs_operational": 4, 00:11:50.672 "base_bdevs_list": [ 00:11:50.672 { 00:11:50.672 "name": "pt1", 00:11:50.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.672 "is_configured": true, 00:11:50.672 "data_offset": 2048, 00:11:50.672 "data_size": 63488 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "name": "pt2", 00:11:50.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.672 "is_configured": true, 00:11:50.672 "data_offset": 2048, 00:11:50.672 "data_size": 63488 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "name": "pt3", 00:11:50.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.672 "is_configured": true, 00:11:50.672 "data_offset": 2048, 00:11:50.672 "data_size": 63488 00:11:50.672 }, 00:11:50.672 { 00:11:50.672 "name": "pt4", 00:11:50.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.672 "is_configured": true, 00:11:50.672 "data_offset": 2048, 00:11:50.672 "data_size": 63488 00:11:50.672 } 00:11:50.672 ] 00:11:50.672 } 00:11:50.672 } 00:11:50.672 }' 00:11:50.672 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.672 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:50.672 pt2 00:11:50.672 pt3 00:11:50.672 pt4' 00:11:50.672 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:50.672 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:50.672 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:50.932 "name": "pt1", 00:11:50.932 "aliases": [ 00:11:50.932 "00000000-0000-0000-0000-000000000001" 00:11:50.932 ], 00:11:50.932 "product_name": "passthru", 00:11:50.932 "block_size": 512, 00:11:50.932 "num_blocks": 65536, 00:11:50.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.932 "assigned_rate_limits": { 00:11:50.932 "rw_ios_per_sec": 0, 00:11:50.932 "rw_mbytes_per_sec": 0, 00:11:50.932 "r_mbytes_per_sec": 0, 00:11:50.932 "w_mbytes_per_sec": 0 00:11:50.932 }, 00:11:50.932 "claimed": true, 00:11:50.932 "claim_type": "exclusive_write", 00:11:50.932 "zoned": false, 00:11:50.932 "supported_io_types": { 00:11:50.932 "read": true, 00:11:50.932 "write": 
true, 00:11:50.932 "unmap": true, 00:11:50.932 "flush": true, 00:11:50.932 "reset": true, 00:11:50.932 "nvme_admin": false, 00:11:50.932 "nvme_io": false, 00:11:50.932 "nvme_io_md": false, 00:11:50.932 "write_zeroes": true, 00:11:50.932 "zcopy": true, 00:11:50.932 "get_zone_info": false, 00:11:50.932 "zone_management": false, 00:11:50.932 "zone_append": false, 00:11:50.932 "compare": false, 00:11:50.932 "compare_and_write": false, 00:11:50.932 "abort": true, 00:11:50.932 "seek_hole": false, 00:11:50.932 "seek_data": false, 00:11:50.932 "copy": true, 00:11:50.932 "nvme_iov_md": false 00:11:50.932 }, 00:11:50.932 "memory_domains": [ 00:11:50.932 { 00:11:50.932 "dma_device_id": "system", 00:11:50.932 "dma_device_type": 1 00:11:50.932 }, 00:11:50.932 { 00:11:50.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.932 "dma_device_type": 2 00:11:50.932 } 00:11:50.932 ], 00:11:50.932 "driver_specific": { 00:11:50.932 "passthru": { 00:11:50.932 "name": "pt1", 00:11:50.932 "base_bdev_name": "malloc1" 00:11:50.932 } 00:11:50.932 } 00:11:50.932 }' 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:50.932 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.192 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.192 "name": "pt2", 00:11:51.192 "aliases": [ 00:11:51.192 "00000000-0000-0000-0000-000000000002" 00:11:51.192 ], 00:11:51.192 "product_name": "passthru", 00:11:51.192 "block_size": 512, 00:11:51.193 "num_blocks": 65536, 00:11:51.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.193 "assigned_rate_limits": { 00:11:51.193 "rw_ios_per_sec": 0, 00:11:51.193 "rw_mbytes_per_sec": 0, 00:11:51.193 "r_mbytes_per_sec": 0, 00:11:51.193 "w_mbytes_per_sec": 0 00:11:51.193 }, 00:11:51.193 "claimed": true, 00:11:51.193 "claim_type": "exclusive_write", 00:11:51.193 "zoned": false, 00:11:51.193 "supported_io_types": { 00:11:51.193 "read": true, 00:11:51.193 "write": true, 00:11:51.193 "unmap": true, 00:11:51.193 "flush": true, 00:11:51.193 "reset": true, 00:11:51.193 "nvme_admin": false, 00:11:51.193 "nvme_io": false, 
00:11:51.193 "nvme_io_md": false, 00:11:51.193 "write_zeroes": true, 00:11:51.193 "zcopy": true, 00:11:51.193 "get_zone_info": false, 00:11:51.193 "zone_management": false, 00:11:51.193 "zone_append": false, 00:11:51.193 "compare": false, 00:11:51.193 "compare_and_write": false, 00:11:51.193 "abort": true, 00:11:51.193 "seek_hole": false, 00:11:51.193 "seek_data": false, 00:11:51.193 "copy": true, 00:11:51.193 "nvme_iov_md": false 00:11:51.193 }, 00:11:51.193 "memory_domains": [ 00:11:51.193 { 00:11:51.193 "dma_device_id": "system", 00:11:51.193 "dma_device_type": 1 00:11:51.193 }, 00:11:51.193 { 00:11:51.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.193 "dma_device_type": 2 00:11:51.193 } 00:11:51.193 ], 00:11:51.193 "driver_specific": { 00:11:51.193 "passthru": { 00:11:51.193 "name": "pt2", 00:11:51.193 "base_bdev_name": "malloc2" 00:11:51.193 } 00:11:51.193 } 00:11:51.193 }' 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:51.193 02:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.453 "name": "pt3", 00:11:51.453 "aliases": [ 00:11:51.453 "00000000-0000-0000-0000-000000000003" 00:11:51.453 ], 00:11:51.453 "product_name": "passthru", 00:11:51.453 "block_size": 512, 00:11:51.453 "num_blocks": 65536, 00:11:51.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:51.453 "assigned_rate_limits": { 00:11:51.453 "rw_ios_per_sec": 0, 00:11:51.453 "rw_mbytes_per_sec": 0, 00:11:51.453 "r_mbytes_per_sec": 0, 00:11:51.453 "w_mbytes_per_sec": 0 00:11:51.453 }, 00:11:51.453 "claimed": true, 00:11:51.453 "claim_type": "exclusive_write", 00:11:51.453 "zoned": false, 00:11:51.453 "supported_io_types": { 00:11:51.453 "read": true, 00:11:51.453 "write": true, 00:11:51.453 "unmap": true, 00:11:51.453 "flush": true, 00:11:51.453 "reset": true, 00:11:51.453 "nvme_admin": false, 00:11:51.453 "nvme_io": false, 00:11:51.453 "nvme_io_md": false, 00:11:51.453 "write_zeroes": true, 00:11:51.453 "zcopy": true, 00:11:51.453 "get_zone_info": false, 00:11:51.453 
"zone_management": false, 00:11:51.453 "zone_append": false, 00:11:51.453 "compare": false, 00:11:51.453 "compare_and_write": false, 00:11:51.453 "abort": true, 00:11:51.453 "seek_hole": false, 00:11:51.453 "seek_data": false, 00:11:51.453 "copy": true, 00:11:51.453 "nvme_iov_md": false 00:11:51.453 }, 00:11:51.453 "memory_domains": [ 00:11:51.453 { 00:11:51.453 "dma_device_id": "system", 00:11:51.453 "dma_device_type": 1 00:11:51.453 }, 00:11:51.453 { 00:11:51.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.453 "dma_device_type": 2 00:11:51.453 } 00:11:51.453 ], 00:11:51.453 "driver_specific": { 00:11:51.453 "passthru": { 00:11:51.453 "name": "pt3", 00:11:51.453 "base_bdev_name": "malloc3" 00:11:51.453 } 00:11:51.453 } 00:11:51.453 }' 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:11:51.453 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.713 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.713 "name": "pt4", 00:11:51.713 "aliases": [ 00:11:51.713 "00000000-0000-0000-0000-000000000004" 00:11:51.713 ], 00:11:51.714 "product_name": "passthru", 00:11:51.714 "block_size": 512, 00:11:51.714 "num_blocks": 65536, 00:11:51.714 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:51.714 "assigned_rate_limits": { 00:11:51.714 "rw_ios_per_sec": 0, 00:11:51.714 "rw_mbytes_per_sec": 0, 00:11:51.714 "r_mbytes_per_sec": 0, 00:11:51.714 "w_mbytes_per_sec": 0 00:11:51.714 }, 00:11:51.714 "claimed": true, 00:11:51.714 "claim_type": "exclusive_write", 00:11:51.714 "zoned": false, 00:11:51.714 "supported_io_types": { 00:11:51.714 "read": true, 00:11:51.714 "write": true, 00:11:51.714 "unmap": true, 00:11:51.714 "flush": true, 00:11:51.714 "reset": true, 00:11:51.714 "nvme_admin": false, 00:11:51.714 "nvme_io": false, 00:11:51.714 "nvme_io_md": false, 00:11:51.714 "write_zeroes": true, 00:11:51.714 "zcopy": true, 00:11:51.714 "get_zone_info": false, 00:11:51.714 "zone_management": false, 00:11:51.714 "zone_append": false, 00:11:51.714 "compare": false, 00:11:51.714 "compare_and_write": false, 00:11:51.714 "abort": 
true, 00:11:51.714 "seek_hole": false, 00:11:51.714 "seek_data": false, 00:11:51.714 "copy": true, 00:11:51.714 "nvme_iov_md": false 00:11:51.714 }, 00:11:51.714 "memory_domains": [ 00:11:51.714 { 00:11:51.714 "dma_device_id": "system", 00:11:51.714 "dma_device_type": 1 00:11:51.714 }, 00:11:51.714 { 00:11:51.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.714 "dma_device_type": 2 00:11:51.714 } 00:11:51.714 ], 00:11:51.714 "driver_specific": { 00:11:51.714 "passthru": { 00:11:51.714 "name": "pt4", 00:11:51.714 "base_bdev_name": "malloc4" 00:11:51.714 } 00:11:51.714 } 00:11:51.714 }' 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:51.714 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:51.974 [2024-07-25 02:36:38.686824] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.974 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b6cbf87b-4a2e-11ef-9c8e-7947904e2597 00:11:51.974 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z b6cbf87b-4a2e-11ef-9c8e-7947904e2597 ']' 00:11:51.974 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:51.974 [2024-07-25 02:36:38.870811] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.974 [2024-07-25 02:36:38.870822] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.974 [2024-07-25 02:36:38.870835] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.974 [2024-07-25 02:36:38.870846] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.974 [2024-07-25 02:36:38.870849] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b51f7a35900 name raid_bdev1, state offline 00:11:52.233 02:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.233 02:36:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:52.233 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:52.233 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:52.233 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.233 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:52.493 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.493 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:52.753 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.753 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:52.753 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.753 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:11:53.013 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:53.013 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:53.272 02:36:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:53.272 [2024-07-25 02:36:40.138897] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:53.272 [2024-07-25 02:36:40.139352] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:53.272 [2024-07-25 02:36:40.139369] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:53.273 [2024-07-25 02:36:40.139375] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:53.273 [2024-07-25 02:36:40.139387] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:53.273 [2024-07-25 02:36:40.139414] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:53.273 [2024-07-25 02:36:40.139421] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:53.273 [2024-07-25 02:36:40.139427] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:53.273 [2024-07-25 02:36:40.139433] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.273 [2024-07-25 02:36:40.139437] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b51f7a35680 name raid_bdev1, state configuring 00:11:53.273 request: 00:11:53.273 { 00:11:53.273 "name": "raid_bdev1", 00:11:53.273 "raid_level": "raid0", 00:11:53.273 "base_bdevs": [ 00:11:53.273 "malloc1", 00:11:53.273 "malloc2", 00:11:53.273 "malloc3", 00:11:53.273 "malloc4" 00:11:53.273 ], 00:11:53.273 "strip_size_kb": 64, 00:11:53.273 "superblock": false, 00:11:53.273 "method": "bdev_raid_create", 00:11:53.273 "req_id": 1 00:11:53.273 } 00:11:53.273 Got JSON-RPC error response 00:11:53.273 response: 00:11:53.273 { 00:11:53.273 "code": -17, 00:11:53.273 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:53.273 } 00:11:53.273 02:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:53.273 02:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.273 02:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.273 02:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.273 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:53.273 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.533 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:53.533 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:53.533 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:53.793 [2024-07-25 02:36:40.498917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:53.793 [2024-07-25 02:36:40.498949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:53.793 [2024-07-25 02:36:40.498956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a35180 00:11:53.793 [2024-07-25 02:36:40.498961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.793 [2024-07-25 02:36:40.499451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.793 [2024-07-25 02:36:40.499476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:53.793 [2024-07-25 02:36:40.499493] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:53.793 [2024-07-25 02:36:40.499502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:53.793 pt1 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.793 "name": "raid_bdev1", 00:11:53.793 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:53.793 "strip_size_kb": 64, 00:11:53.793 "state": "configuring", 00:11:53.793 "raid_level": "raid0", 00:11:53.793 "superblock": true, 00:11:53.793 "num_base_bdevs": 4, 00:11:53.793 "num_base_bdevs_discovered": 1, 00:11:53.793 "num_base_bdevs_operational": 4, 00:11:53.793 "base_bdevs_list": [ 00:11:53.793 { 00:11:53.793 "name": "pt1", 00:11:53.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.793 "is_configured": true, 00:11:53.793 "data_offset": 2048, 00:11:53.793 "data_size": 63488 00:11:53.793 }, 00:11:53.793 { 00:11:53.793 "name": null, 00:11:53.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.793 "is_configured": false, 00:11:53.793 "data_offset": 2048, 00:11:53.793 "data_size": 63488 00:11:53.793 }, 00:11:53.793 { 00:11:53.793 "name": null, 00:11:53.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.793 "is_configured": false, 00:11:53.793 "data_offset": 2048, 00:11:53.793 "data_size": 63488 00:11:53.793 }, 00:11:53.793 { 00:11:53.793 "name": null, 00:11:53.793 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.793 "is_configured": false, 00:11:53.793 "data_offset": 2048, 00:11:53.793 "data_size": 63488 
00:11:53.793 } 00:11:53.793 ] 00:11:53.793 }' 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.793 02:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.392 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:11:54.392 02:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:54.392 [2024-07-25 02:36:41.134965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:54.392 [2024-07-25 02:36:41.134994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.392 [2024-07-25 02:36:41.135002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a34780 00:11:54.392 [2024-07-25 02:36:41.135007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.392 [2024-07-25 02:36:41.135097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.392 [2024-07-25 02:36:41.135104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:54.392 [2024-07-25 02:36:41.135116] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:54.392 [2024-07-25 02:36:41.135122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.392 pt2 00:11:54.392 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:54.678 [2024-07-25 02:36:41.318972] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.678 "name": "raid_bdev1", 00:11:54.678 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:54.678 "strip_size_kb": 64, 00:11:54.678 "state": "configuring", 00:11:54.678 "raid_level": 
"raid0", 00:11:54.678 "superblock": true, 00:11:54.678 "num_base_bdevs": 4, 00:11:54.678 "num_base_bdevs_discovered": 1, 00:11:54.678 "num_base_bdevs_operational": 4, 00:11:54.678 "base_bdevs_list": [ 00:11:54.678 { 00:11:54.678 "name": "pt1", 00:11:54.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.678 "is_configured": true, 00:11:54.678 "data_offset": 2048, 00:11:54.678 "data_size": 63488 00:11:54.678 }, 00:11:54.678 { 00:11:54.678 "name": null, 00:11:54.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.678 "is_configured": false, 00:11:54.678 "data_offset": 2048, 00:11:54.678 "data_size": 63488 00:11:54.678 }, 00:11:54.678 { 00:11:54.678 "name": null, 00:11:54.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.678 "is_configured": false, 00:11:54.678 "data_offset": 2048, 00:11:54.678 "data_size": 63488 00:11:54.678 }, 00:11:54.678 { 00:11:54.678 "name": null, 00:11:54.678 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.678 "is_configured": false, 00:11:54.678 "data_offset": 2048, 00:11:54.678 "data_size": 63488 00:11:54.678 } 00:11:54.678 ] 00:11:54.678 }' 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.678 02:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:54.938 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:54.938 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:55.197 [2024-07-25 02:36:41.931012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:55.198 [2024-07-25 02:36:41.931038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.198 [2024-07-25 02:36:41.931060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a34780 00:11:55.198 [2024-07-25 02:36:41.931066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.198 [2024-07-25 02:36:41.931132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.198 [2024-07-25 02:36:41.931139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:55.198 [2024-07-25 02:36:41.931152] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:55.198 [2024-07-25 02:36:41.931157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:55.198 pt2 00:11:55.198 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:55.198 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:55.198 02:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:55.458 [2024-07-25 02:36:42.111024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:55.458 [2024-07-25 02:36:42.111048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.458 [2024-07-25 02:36:42.111071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a35b80 00:11:55.458 
[2024-07-25 02:36:42.111076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.458 [2024-07-25 02:36:42.111128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.458 [2024-07-25 02:36:42.111134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:55.458 [2024-07-25 02:36:42.111145] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:55.458 [2024-07-25 02:36:42.111151] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:55.458 pt3 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:55.458 [2024-07-25 02:36:42.291034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:55.458 [2024-07-25 02:36:42.291058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.458 [2024-07-25 02:36:42.291081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b51f7a35900 00:11:55.458 [2024-07-25 02:36:42.291086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.458 [2024-07-25 02:36:42.291136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.458 [2024-07-25 02:36:42.291142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:55.458 [2024-07-25 02:36:42.291153] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:55.458 [2024-07-25 02:36:42.291158] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:55.458 [2024-07-25 02:36:42.291177] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b51f7a34c80 00:11:55.458 [2024-07-25 02:36:42.291179] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:55.458 [2024-07-25 02:36:42.291194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b51f7a97e20 00:11:55.458 [2024-07-25 02:36:42.291245] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b51f7a34c80 00:11:55.458 [2024-07-25 02:36:42.291253] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b51f7a34c80 00:11:55.458 [2024-07-25 02:36:42.291268] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.458 pt4 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.458 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.718 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.718 "name": "raid_bdev1", 00:11:55.718 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:55.718 "strip_size_kb": 64, 00:11:55.718 "state": "online", 00:11:55.718 "raid_level": "raid0", 00:11:55.718 "superblock": true, 00:11:55.718 "num_base_bdevs": 4, 00:11:55.718 "num_base_bdevs_discovered": 4, 00:11:55.718 "num_base_bdevs_operational": 4, 00:11:55.718 "base_bdevs_list": [ 00:11:55.718 { 00:11:55.718 "name": "pt1", 00:11:55.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.718 "is_configured": true, 00:11:55.718 "data_offset": 2048, 00:11:55.718 "data_size": 63488 00:11:55.718 }, 00:11:55.718 { 00:11:55.718 "name": "pt2", 00:11:55.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.718 "is_configured": true, 00:11:55.718 "data_offset": 2048, 00:11:55.718 "data_size": 63488 00:11:55.718 }, 00:11:55.718 { 00:11:55.718 "name": "pt3", 00:11:55.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.718 "is_configured": true, 00:11:55.718 "data_offset": 2048, 00:11:55.718 "data_size": 63488 00:11:55.718 }, 00:11:55.718 { 00:11:55.718 "name": "pt4", 00:11:55.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.718 "is_configured": true, 00:11:55.718 "data_offset": 2048, 00:11:55.718 "data_size": 63488 00:11:55.718 } 00:11:55.718 ] 00:11:55.718 }' 00:11:55.718 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.718 02:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:55.978 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:56.239 [2024-07-25 02:36:42.911113] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.239 02:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:56.239 "name": "raid_bdev1", 00:11:56.239 "aliases": [ 00:11:56.239 "b6cbf87b-4a2e-11ef-9c8e-7947904e2597" 00:11:56.239 ], 00:11:56.239 "product_name": "Raid Volume", 00:11:56.239 "block_size": 512, 00:11:56.239 "num_blocks": 253952, 00:11:56.239 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:56.239 "assigned_rate_limits": { 00:11:56.239 "rw_ios_per_sec": 0, 00:11:56.239 "rw_mbytes_per_sec": 0, 00:11:56.239 "r_mbytes_per_sec": 0, 00:11:56.239 "w_mbytes_per_sec": 0 00:11:56.239 }, 00:11:56.239 "claimed": false, 00:11:56.239 "zoned": false, 00:11:56.239 "supported_io_types": { 00:11:56.239 "read": true, 00:11:56.239 "write": true, 00:11:56.239 "unmap": true, 00:11:56.239 "flush": true, 00:11:56.239 "reset": true, 00:11:56.239 "nvme_admin": false, 00:11:56.239 "nvme_io": false, 00:11:56.239 "nvme_io_md": false, 00:11:56.239 "write_zeroes": true, 00:11:56.239 "zcopy": false, 00:11:56.239 "get_zone_info": false, 00:11:56.239 "zone_management": false, 00:11:56.239 "zone_append": false, 00:11:56.239 "compare": false, 00:11:56.239 "compare_and_write": false, 00:11:56.239 "abort": false, 00:11:56.239 "seek_hole": false, 00:11:56.239 "seek_data": false, 00:11:56.239 "copy": false, 00:11:56.239 "nvme_iov_md": false 00:11:56.239 }, 00:11:56.239 "memory_domains": [ 00:11:56.239 { 00:11:56.239 "dma_device_id": "system", 00:11:56.239 "dma_device_type": 1 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.239 "dma_device_type": 2 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "system", 00:11:56.239 "dma_device_type": 1 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.239 "dma_device_type": 2 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "system", 00:11:56.239 "dma_device_type": 1 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.239 "dma_device_type": 2 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "system", 00:11:56.239 "dma_device_type": 1 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.239 "dma_device_type": 2 00:11:56.239 } 00:11:56.239 ], 00:11:56.239 "driver_specific": { 00:11:56.239 "raid": { 00:11:56.239 "uuid": "b6cbf87b-4a2e-11ef-9c8e-7947904e2597", 00:11:56.239 "strip_size_kb": 64, 00:11:56.239 "state": "online", 00:11:56.239 "raid_level": "raid0", 00:11:56.239 "superblock": true, 00:11:56.239 "num_base_bdevs": 4, 00:11:56.239 "num_base_bdevs_discovered": 4, 00:11:56.239 "num_base_bdevs_operational": 4, 00:11:56.239 "base_bdevs_list": [ 00:11:56.239 { 00:11:56.239 "name": "pt1", 00:11:56.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.239 "is_configured": true, 00:11:56.239 "data_offset": 2048, 00:11:56.239 "data_size": 63488 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "name": "pt2", 00:11:56.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.239 "is_configured": true, 00:11:56.239 "data_offset": 2048, 00:11:56.239 "data_size": 63488 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "name": "pt3", 00:11:56.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.239 "is_configured": true, 00:11:56.239 "data_offset": 2048, 00:11:56.239 "data_size": 63488 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "name": "pt4", 00:11:56.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.239 "is_configured": true, 00:11:56.239 "data_offset": 2048, 00:11:56.239 
"data_size": 63488 00:11:56.239 } 00:11:56.239 ] 00:11:56.239 } 00:11:56.239 } 00:11:56.239 }' 00:11:56.239 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.239 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:56.239 pt2 00:11:56.239 pt3 00:11:56.239 pt4' 00:11:56.239 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.239 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:56.239 02:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.239 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.239 "name": "pt1", 00:11:56.239 "aliases": [ 00:11:56.239 "00000000-0000-0000-0000-000000000001" 00:11:56.239 ], 00:11:56.239 "product_name": "passthru", 00:11:56.239 "block_size": 512, 00:11:56.239 "num_blocks": 65536, 00:11:56.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.239 "assigned_rate_limits": { 00:11:56.239 "rw_ios_per_sec": 0, 00:11:56.239 "rw_mbytes_per_sec": 0, 00:11:56.239 "r_mbytes_per_sec": 0, 00:11:56.239 "w_mbytes_per_sec": 0 00:11:56.239 }, 00:11:56.239 "claimed": true, 00:11:56.239 "claim_type": "exclusive_write", 00:11:56.239 "zoned": false, 00:11:56.239 "supported_io_types": { 00:11:56.239 "read": true, 00:11:56.239 "write": true, 00:11:56.239 "unmap": true, 00:11:56.239 "flush": true, 00:11:56.239 "reset": true, 00:11:56.239 "nvme_admin": false, 00:11:56.239 "nvme_io": false, 00:11:56.239 "nvme_io_md": false, 00:11:56.239 "write_zeroes": true, 00:11:56.239 "zcopy": true, 00:11:56.239 "get_zone_info": false, 00:11:56.239 "zone_management": false, 00:11:56.239 "zone_append": false, 00:11:56.239 "compare": false, 00:11:56.239 "compare_and_write": false, 00:11:56.239 "abort": true, 00:11:56.239 "seek_hole": false, 00:11:56.239 "seek_data": false, 00:11:56.239 "copy": true, 00:11:56.239 "nvme_iov_md": false 00:11:56.239 }, 00:11:56.239 "memory_domains": [ 00:11:56.239 { 00:11:56.239 "dma_device_id": "system", 00:11:56.239 "dma_device_type": 1 00:11:56.239 }, 00:11:56.239 { 00:11:56.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.239 "dma_device_type": 2 00:11:56.239 } 00:11:56.239 ], 00:11:56.239 "driver_specific": { 00:11:56.239 "passthru": { 00:11:56.239 "name": "pt1", 00:11:56.239 "base_bdev_name": "malloc1" 00:11:56.239 } 00:11:56.239 } 00:11:56.239 }' 00:11:56.239 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.239 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.239 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.239 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.499 02:36:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.499 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.499 "name": "pt2", 00:11:56.499 "aliases": [ 00:11:56.499 "00000000-0000-0000-0000-000000000002" 00:11:56.499 ], 00:11:56.499 "product_name": "passthru", 00:11:56.499 "block_size": 512, 00:11:56.499 "num_blocks": 65536, 00:11:56.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.499 "assigned_rate_limits": { 00:11:56.499 "rw_ios_per_sec": 0, 00:11:56.499 "rw_mbytes_per_sec": 0, 00:11:56.499 "r_mbytes_per_sec": 0, 00:11:56.499 "w_mbytes_per_sec": 0 00:11:56.499 }, 00:11:56.499 "claimed": true, 00:11:56.500 "claim_type": "exclusive_write", 00:11:56.500 "zoned": false, 00:11:56.500 "supported_io_types": { 00:11:56.500 "read": true, 00:11:56.500 "write": true, 00:11:56.500 "unmap": true, 00:11:56.500 "flush": true, 00:11:56.500 "reset": true, 00:11:56.500 "nvme_admin": false, 00:11:56.500 "nvme_io": false, 00:11:56.500 "nvme_io_md": false, 00:11:56.500 "write_zeroes": true, 00:11:56.500 "zcopy": true, 00:11:56.500 "get_zone_info": false, 00:11:56.500 "zone_management": false, 00:11:56.500 "zone_append": false, 00:11:56.500 "compare": false, 00:11:56.500 "compare_and_write": false, 00:11:56.500 "abort": true, 00:11:56.500 "seek_hole": false, 00:11:56.500 "seek_data": false, 00:11:56.500 "copy": true, 00:11:56.500 "nvme_iov_md": false 00:11:56.500 }, 00:11:56.500 "memory_domains": [ 00:11:56.500 { 00:11:56.500 "dma_device_id": "system", 00:11:56.500 "dma_device_type": 1 00:11:56.500 }, 00:11:56.500 { 00:11:56.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.500 "dma_device_type": 2 00:11:56.500 } 00:11:56.500 ], 00:11:56.500 "driver_specific": { 00:11:56.500 "passthru": { 00:11:56.500 "name": "pt2", 00:11:56.500 "base_bdev_name": "malloc2" 00:11:56.500 } 00:11:56.500 } 00:11:56.500 }' 00:11:56.500 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.500 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.759 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.759 "name": "pt3", 00:11:56.759 "aliases": [ 00:11:56.759 "00000000-0000-0000-0000-000000000003" 00:11:56.759 ], 00:11:56.759 "product_name": "passthru", 00:11:56.759 "block_size": 512, 00:11:56.759 "num_blocks": 65536, 00:11:56.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.759 "assigned_rate_limits": { 00:11:56.759 "rw_ios_per_sec": 0, 00:11:56.759 "rw_mbytes_per_sec": 0, 00:11:56.759 "r_mbytes_per_sec": 0, 00:11:56.759 "w_mbytes_per_sec": 0 00:11:56.759 }, 00:11:56.759 "claimed": true, 00:11:56.759 "claim_type": "exclusive_write", 00:11:56.759 "zoned": false, 00:11:56.759 "supported_io_types": { 00:11:56.759 "read": true, 00:11:56.759 "write": true, 00:11:56.759 "unmap": true, 00:11:56.759 "flush": true, 00:11:56.759 "reset": true, 00:11:56.759 "nvme_admin": false, 00:11:56.759 "nvme_io": false, 00:11:56.759 "nvme_io_md": false, 00:11:56.759 "write_zeroes": true, 00:11:56.759 "zcopy": true, 00:11:56.759 "get_zone_info": false, 00:11:56.759 "zone_management": false, 00:11:56.759 "zone_append": false, 00:11:56.759 "compare": false, 00:11:56.759 "compare_and_write": false, 00:11:56.759 "abort": true, 00:11:56.759 "seek_hole": false, 00:11:56.759 "seek_data": false, 00:11:56.759 "copy": true, 00:11:56.759 "nvme_iov_md": false 00:11:56.759 }, 00:11:56.759 "memory_domains": [ 00:11:56.759 { 00:11:56.759 "dma_device_id": "system", 00:11:56.759 "dma_device_type": 1 00:11:56.759 }, 00:11:56.759 { 00:11:56.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.759 "dma_device_type": 2 00:11:56.759 } 00:11:56.759 ], 00:11:56.759 "driver_specific": { 00:11:56.759 "passthru": { 00:11:56.759 "name": "pt3", 00:11:56.759 "base_bdev_name": "malloc3" 00:11:56.759 } 00:11:56.759 } 00:11:56.759 }' 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.017 02:36:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:11:57.017 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:57.276 "name": "pt4", 00:11:57.276 "aliases": [ 00:11:57.276 "00000000-0000-0000-0000-000000000004" 00:11:57.276 ], 00:11:57.276 "product_name": "passthru", 00:11:57.276 "block_size": 512, 00:11:57.276 "num_blocks": 65536, 00:11:57.276 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.276 "assigned_rate_limits": { 00:11:57.276 "rw_ios_per_sec": 0, 00:11:57.276 "rw_mbytes_per_sec": 0, 00:11:57.276 "r_mbytes_per_sec": 0, 00:11:57.276 "w_mbytes_per_sec": 0 00:11:57.276 }, 00:11:57.276 "claimed": true, 00:11:57.276 "claim_type": "exclusive_write", 00:11:57.276 "zoned": false, 00:11:57.276 "supported_io_types": { 00:11:57.276 "read": true, 00:11:57.276 "write": true, 00:11:57.276 "unmap": true, 00:11:57.276 "flush": true, 00:11:57.276 "reset": true, 00:11:57.276 "nvme_admin": false, 00:11:57.276 "nvme_io": false, 00:11:57.276 "nvme_io_md": false, 00:11:57.276 "write_zeroes": true, 00:11:57.276 "zcopy": true, 00:11:57.276 "get_zone_info": false, 00:11:57.276 "zone_management": false, 00:11:57.276 "zone_append": false, 00:11:57.276 "compare": false, 00:11:57.276 "compare_and_write": false, 00:11:57.276 "abort": true, 00:11:57.276 "seek_hole": false, 00:11:57.276 "seek_data": false, 00:11:57.276 "copy": true, 00:11:57.276 "nvme_iov_md": false 00:11:57.276 }, 00:11:57.276 "memory_domains": [ 00:11:57.276 { 00:11:57.276 "dma_device_id": "system", 00:11:57.276 "dma_device_type": 1 00:11:57.276 }, 00:11:57.276 { 00:11:57.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.276 "dma_device_type": 2 00:11:57.276 } 00:11:57.276 ], 00:11:57.276 "driver_specific": { 00:11:57.276 "passthru": { 00:11:57.276 "name": "pt4", 00:11:57.276 "base_bdev_name": "malloc4" 00:11:57.276 } 00:11:57.276 } 00:11:57.276 }' 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.276 02:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.276 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.276 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.276 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.276 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.276 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:11:57.276 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:57.535 [2024-07-25 02:36:44.191220] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' b6cbf87b-4a2e-11ef-9c8e-7947904e2597 '!=' b6cbf87b-4a2e-11ef-9c8e-7947904e2597 ']' 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 59646 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 59646 ']' 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 59646 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 59646 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:57.535 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:57.536 killing process with pid 59646 00:11:57.536 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59646' 00:11:57.536 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 59646 00:11:57.536 [2024-07-25 02:36:44.221718] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.536 [2024-07-25 02:36:44.221735] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.536 [2024-07-25 02:36:44.221757] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.536 [2024-07-25 02:36:44.221760] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b51f7a34c80 name raid_bdev1, state offline 00:11:57.536 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 59646 00:11:57.536 [2024-07-25 02:36:44.240215] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.536 02:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:57.536 00:11:57.536 real 0m10.114s 00:11:57.536 user 0m17.572s 00:11:57.536 sys 0m1.940s 00:11:57.536 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.536 02:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.536 ************************************ 00:11:57.536 END TEST raid_superblock_test 00:11:57.536 ************************************ 00:11:57.796 02:36:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:57.796 02:36:44 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:57.796 02:36:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:57.796 02:36:44 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.796 02:36:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.796 ************************************ 00:11:57.796 START TEST raid_read_error_test 00:11:57.796 ************************************ 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.9qMgwakr4U 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60035 00:11:57.796 02:36:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60035 /var/tmp/spdk-raid.sock 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60035 ']' 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.796 02:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.796 [2024-07-25 02:36:44.494336] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:11:57.796 [2024-07-25 02:36:44.494663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:58.055 EAL: TSC is not safe to use in SMP mode 00:11:58.055 EAL: TSC is not invariant 00:11:58.055 [2024-07-25 02:36:44.910714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.314 [2024-07-25 02:36:45.003876] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
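The base-bdev setup that raid_io_error_test drives against this bdevperf instance in the records that follow boils down, roughly, to the RPC sequence sketched below. The rpc.py path, socket, bdev names and flags are the ones visible in the traced commands; the EE_ prefix for the error-injection bdev is implied by the bdev_error_create/bdev_passthru_create pair, and the loop is an illustrative sketch of the flow rather than the literal bdev_raid.sh code.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      # backing malloc bdev (32 MiB, 512-byte blocks), then an error-injection
      # wrapper, then a passthru bdev that the raid volume will claim
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "${bdev}_malloc"
      "$rpc" -s "$sock" bdev_error_create "${bdev}_malloc"
      "$rpc" -s "$sock" bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
  done
  # assemble the four passthru bdevs into a raid0 volume (64 KiB strip, with superblock)
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s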
00:11:58.314 [2024-07-25 02:36:45.005524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.314 [2024-07-25 02:36:45.006085] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.314 [2024-07-25 02:36:45.006096] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.574 02:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.574 02:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:58.574 02:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:58.574 02:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.833 BaseBdev1_malloc 00:11:58.833 02:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:58.833 true 00:11:58.833 02:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:59.093 [2024-07-25 02:36:45.892930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:59.093 [2024-07-25 02:36:45.892972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.093 [2024-07-25 02:36:45.893008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2fa155e34780 00:11:59.093 [2024-07-25 02:36:45.893014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.093 [2024-07-25 02:36:45.893457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.093 [2024-07-25 02:36:45.893483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.093 BaseBdev1 00:11:59.093 02:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:59.093 02:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.352 BaseBdev2_malloc 00:11:59.352 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:59.612 true 00:11:59.612 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:59.612 [2024-07-25 02:36:46.412989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:59.612 [2024-07-25 02:36:46.413029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.612 [2024-07-25 02:36:46.413048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2fa155e34c80 00:11:59.612 [2024-07-25 02:36:46.413054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.612 [2024-07-25 02:36:46.413494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.612 [2024-07-25 02:36:46.413522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:11:59.612 BaseBdev2 00:11:59.612 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:59.612 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:59.872 BaseBdev3_malloc 00:11:59.872 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:59.872 true 00:11:59.872 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:00.131 [2024-07-25 02:36:46.921028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:00.131 [2024-07-25 02:36:46.921067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.131 [2024-07-25 02:36:46.921087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2fa155e35180 00:12:00.131 [2024-07-25 02:36:46.921093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.131 [2024-07-25 02:36:46.921532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.131 [2024-07-25 02:36:46.921559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:00.131 BaseBdev3 00:12:00.131 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:00.131 02:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:00.390 BaseBdev4_malloc 00:12:00.390 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:12:00.390 true 00:12:00.390 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:00.649 [2024-07-25 02:36:47.461087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:00.649 [2024-07-25 02:36:47.461123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.649 [2024-07-25 02:36:47.461141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2fa155e35680 00:12:00.649 [2024-07-25 02:36:47.461146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.649 [2024-07-25 02:36:47.461583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.649 [2024-07-25 02:36:47.461609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:00.649 BaseBdev4 00:12:00.649 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:12:00.909 [2024-07-25 02:36:47.641109] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.909 [2024-07-25 02:36:47.641496] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.909 [2024-07-25 02:36:47.641517] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.909 [2024-07-25 02:36:47.641546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.909 [2024-07-25 02:36:47.641599] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2fa155e35900 00:12:00.909 [2024-07-25 02:36:47.641609] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:00.909 [2024-07-25 02:36:47.641641] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2fa155ea0e20 00:12:00.909 [2024-07-25 02:36:47.641699] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2fa155e35900 00:12:00.909 [2024-07-25 02:36:47.641706] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2fa155e35900 00:12:00.909 [2024-07-25 02:36:47.641724] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.909 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.168 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.168 "name": "raid_bdev1", 00:12:01.168 "uuid": "bd491ca5-4a2e-11ef-9c8e-7947904e2597", 00:12:01.168 "strip_size_kb": 64, 00:12:01.168 "state": "online", 00:12:01.168 "raid_level": "raid0", 00:12:01.168 "superblock": true, 00:12:01.168 "num_base_bdevs": 4, 00:12:01.168 "num_base_bdevs_discovered": 4, 00:12:01.168 "num_base_bdevs_operational": 4, 00:12:01.168 "base_bdevs_list": [ 00:12:01.168 { 00:12:01.168 "name": "BaseBdev1", 00:12:01.168 "uuid": "28a74137-7806-0a51-bf0d-ab672b6c05a9", 00:12:01.168 "is_configured": true, 00:12:01.168 "data_offset": 2048, 00:12:01.168 "data_size": 63488 00:12:01.168 }, 00:12:01.168 { 00:12:01.168 "name": "BaseBdev2", 00:12:01.168 "uuid": "abfaaf7c-a7a2-005e-8811-1b25aa275699", 00:12:01.168 "is_configured": true, 00:12:01.168 "data_offset": 2048, 00:12:01.168 "data_size": 63488 00:12:01.168 }, 00:12:01.168 { 00:12:01.168 "name": "BaseBdev3", 00:12:01.168 "uuid": 
"809844d1-e651-1f54-b462-9196431ef042", 00:12:01.168 "is_configured": true, 00:12:01.168 "data_offset": 2048, 00:12:01.168 "data_size": 63488 00:12:01.169 }, 00:12:01.169 { 00:12:01.169 "name": "BaseBdev4", 00:12:01.169 "uuid": "8b9aa362-9620-585e-b08b-d22d548b3040", 00:12:01.169 "is_configured": true, 00:12:01.169 "data_offset": 2048, 00:12:01.169 "data_size": 63488 00:12:01.169 } 00:12:01.169 ] 00:12:01.169 }' 00:12:01.169 02:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.169 02:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.428 02:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:01.428 02:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:01.428 [2024-07-25 02:36:48.185182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2fa155ea0ec0 00:12:02.366 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.625 "name": "raid_bdev1", 00:12:02.625 "uuid": "bd491ca5-4a2e-11ef-9c8e-7947904e2597", 00:12:02.625 "strip_size_kb": 64, 00:12:02.625 "state": "online", 00:12:02.625 "raid_level": "raid0", 00:12:02.625 "superblock": true, 00:12:02.625 "num_base_bdevs": 4, 00:12:02.625 "num_base_bdevs_discovered": 4, 00:12:02.625 "num_base_bdevs_operational": 4, 00:12:02.625 "base_bdevs_list": [ 00:12:02.625 { 00:12:02.625 "name": "BaseBdev1", 00:12:02.625 "uuid": 
"28a74137-7806-0a51-bf0d-ab672b6c05a9", 00:12:02.625 "is_configured": true, 00:12:02.625 "data_offset": 2048, 00:12:02.625 "data_size": 63488 00:12:02.625 }, 00:12:02.625 { 00:12:02.625 "name": "BaseBdev2", 00:12:02.625 "uuid": "abfaaf7c-a7a2-005e-8811-1b25aa275699", 00:12:02.625 "is_configured": true, 00:12:02.625 "data_offset": 2048, 00:12:02.625 "data_size": 63488 00:12:02.625 }, 00:12:02.625 { 00:12:02.625 "name": "BaseBdev3", 00:12:02.625 "uuid": "809844d1-e651-1f54-b462-9196431ef042", 00:12:02.625 "is_configured": true, 00:12:02.625 "data_offset": 2048, 00:12:02.625 "data_size": 63488 00:12:02.625 }, 00:12:02.625 { 00:12:02.625 "name": "BaseBdev4", 00:12:02.625 "uuid": "8b9aa362-9620-585e-b08b-d22d548b3040", 00:12:02.625 "is_configured": true, 00:12:02.625 "data_offset": 2048, 00:12:02.625 "data_size": 63488 00:12:02.625 } 00:12:02.625 ] 00:12:02.625 }' 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.625 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.885 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:03.145 [2024-07-25 02:36:49.949513] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.145 [2024-07-25 02:36:49.949540] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.145 [2024-07-25 02:36:49.949808] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.145 [2024-07-25 02:36:49.949821] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.145 [2024-07-25 02:36:49.949829] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.145 [2024-07-25 02:36:49.949833] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2fa155e35900 name raid_bdev1, state offline 00:12:03.145 0 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60035 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60035 ']' 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60035 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60035 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:03.145 killing process with pid 60035 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60035' 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60035 00:12:03.145 [2024-07-25 02:36:49.978916] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.145 02:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60035 00:12:03.145 [2024-07-25 02:36:49.997148] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.9qMgwakr4U 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.57 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.57 != \0\.\0\0 ]] 00:12:03.405 00:12:03.405 real 0m5.703s 00:12:03.405 user 0m8.755s 00:12:03.405 sys 0m0.950s 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.405 02:36:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.405 ************************************ 00:12:03.405 END TEST raid_read_error_test 00:12:03.405 ************************************ 00:12:03.405 02:36:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:03.405 02:36:50 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:03.405 02:36:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:03.405 02:36:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.405 02:36:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.405 ************************************ 00:12:03.405 START TEST raid_write_error_test 00:12:03.405 ************************************ 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.5cfeo1SBIT 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60165 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60165 /var/tmp/spdk-raid.sock 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60165 ']' 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.405 02:36:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.405 [2024-07-25 02:36:50.258546] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:12:03.405 [2024-07-25 02:36:50.258846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:03.975 EAL: TSC is not safe to use in SMP mode 00:12:03.975 EAL: TSC is not invariant 00:12:03.975 [2024-07-25 02:36:50.672533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.975 [2024-07-25 02:36:50.763745] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
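The body of the read-error pass above (and of the write-error pass that follows) can be sketched the same way: run I/O in the background, inject failures on BaseBdev1's error bdev while the I/O is in flight, confirm the raid0 array stays online, then tear down and check the failure rate recorded in the bdevperf log. Variable names repeat those from the earlier sketch; the log path is the one this particular run got from mktemp, and the kill is a simplified stand-in for the killprocess helper.

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    RPC="$SPDK/scripts/rpc.py -s $SOCK"
    LOG=/raidtest/tmp.9qMgwakr4U      # bdevperf log from the read pass above

    # Kick off the I/O run in the background, then inject read failures on
    # BaseBdev1's error bdev while it is in flight (write failures in the
    # write-error variant below).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests &
    sleep 1
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure

    # raid0 has no redundancy, so the array is still expected to report state
    # "online" with all four base bdevs, matching the JSON dump in the trace.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

    # Tear down, then pull the failures-per-second column out of the bdevperf
    # log; the test only asserts that it is non-zero (0.57 in the read pass).
    $RPC bdev_raid_delete raid_bdev1
    kill "$raid_pid"                  # raid_pid from the launch sketch earlier
    fail_per_s=$(grep -v Job "$LOG" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]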
00:12:03.975 [2024-07-25 02:36:50.765402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.975 [2024-07-25 02:36:50.765992] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.975 [2024-07-25 02:36:50.766002] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.545 02:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.545 02:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:04.545 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:04.545 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.545 BaseBdev1_malloc 00:12:04.545 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:04.805 true 00:12:04.805 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:04.805 [2024-07-25 02:36:51.688800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:04.805 [2024-07-25 02:36:51.688853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.805 [2024-07-25 02:36:51.688870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32d3a6234780 00:12:04.805 [2024-07-25 02:36:51.688876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.805 [2024-07-25 02:36:51.689229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.805 [2024-07-25 02:36:51.689250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.805 BaseBdev1 00:12:04.805 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:04.805 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.065 BaseBdev2_malloc 00:12:05.065 02:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:05.325 true 00:12:05.325 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.325 [2024-07-25 02:36:52.216809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.325 [2024-07-25 02:36:52.216842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.325 [2024-07-25 02:36:52.216861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32d3a6234c80 00:12:05.325 [2024-07-25 02:36:52.216867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.325 [2024-07-25 02:36:52.217245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.325 [2024-07-25 02:36:52.217271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:12:05.325 BaseBdev2 00:12:05.584 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:05.584 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:05.584 BaseBdev3_malloc 00:12:05.584 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:05.842 true 00:12:05.842 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:06.155 [2024-07-25 02:36:52.756847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:06.155 [2024-07-25 02:36:52.756899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.155 [2024-07-25 02:36:52.756920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32d3a6235180 00:12:06.155 [2024-07-25 02:36:52.756926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.155 [2024-07-25 02:36:52.757355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.155 [2024-07-25 02:36:52.757380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:06.155 BaseBdev3 00:12:06.155 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:06.155 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:06.155 BaseBdev4_malloc 00:12:06.156 02:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:12:06.415 true 00:12:06.415 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:06.415 [2024-07-25 02:36:53.252856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:06.415 [2024-07-25 02:36:53.252891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.415 [2024-07-25 02:36:53.252926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32d3a6235680 00:12:06.415 [2024-07-25 02:36:53.252931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.415 [2024-07-25 02:36:53.253345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.415 [2024-07-25 02:36:53.253375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:06.415 BaseBdev4 00:12:06.415 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:12:06.674 [2024-07-25 02:36:53.432868] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.674 [2024-07-25 02:36:53.433269] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.674 [2024-07-25 02:36:53.433291] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.674 [2024-07-25 02:36:53.433302] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.674 [2024-07-25 02:36:53.433354] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x32d3a6235900 00:12:06.674 [2024-07-25 02:36:53.433359] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:06.674 [2024-07-25 02:36:53.433390] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x32d3a62a0e20 00:12:06.674 [2024-07-25 02:36:53.433439] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x32d3a6235900 00:12:06.674 [2024-07-25 02:36:53.433442] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x32d3a6235900 00:12:06.674 [2024-07-25 02:36:53.433459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.674 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.933 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.933 "name": "raid_bdev1", 00:12:06.933 "uuid": "c0bcdd3f-4a2e-11ef-9c8e-7947904e2597", 00:12:06.933 "strip_size_kb": 64, 00:12:06.933 "state": "online", 00:12:06.933 "raid_level": "raid0", 00:12:06.933 "superblock": true, 00:12:06.933 "num_base_bdevs": 4, 00:12:06.933 "num_base_bdevs_discovered": 4, 00:12:06.933 "num_base_bdevs_operational": 4, 00:12:06.933 "base_bdevs_list": [ 00:12:06.933 { 00:12:06.933 "name": "BaseBdev1", 00:12:06.933 "uuid": "6ed3425c-03f3-c754-be77-f6884a5208d1", 00:12:06.933 "is_configured": true, 00:12:06.933 "data_offset": 2048, 00:12:06.933 "data_size": 63488 00:12:06.933 }, 00:12:06.933 { 00:12:06.933 "name": "BaseBdev2", 00:12:06.933 "uuid": "602aeed4-bd66-465e-adaa-5bae8055e20d", 00:12:06.933 "is_configured": true, 00:12:06.933 "data_offset": 2048, 00:12:06.933 "data_size": 63488 00:12:06.933 }, 00:12:06.933 { 00:12:06.933 "name": "BaseBdev3", 00:12:06.933 "uuid": 
"acc1da26-bd1b-e05d-9933-5db7b1d4d2da", 00:12:06.933 "is_configured": true, 00:12:06.933 "data_offset": 2048, 00:12:06.933 "data_size": 63488 00:12:06.933 }, 00:12:06.933 { 00:12:06.933 "name": "BaseBdev4", 00:12:06.933 "uuid": "7b091e4f-a671-4b59-91ae-85257476c40f", 00:12:06.933 "is_configured": true, 00:12:06.933 "data_offset": 2048, 00:12:06.933 "data_size": 63488 00:12:06.933 } 00:12:06.933 ] 00:12:06.933 }' 00:12:06.933 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.933 02:36:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.201 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:07.201 02:36:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:07.201 [2024-07-25 02:36:53.992939] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x32d3a62a0ec0 00:12:08.161 02:36:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.421 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.681 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.681 "name": "raid_bdev1", 00:12:08.681 "uuid": "c0bcdd3f-4a2e-11ef-9c8e-7947904e2597", 00:12:08.681 "strip_size_kb": 64, 00:12:08.681 "state": "online", 00:12:08.681 "raid_level": "raid0", 00:12:08.681 "superblock": true, 00:12:08.681 "num_base_bdevs": 4, 00:12:08.681 "num_base_bdevs_discovered": 4, 00:12:08.681 "num_base_bdevs_operational": 4, 00:12:08.681 "base_bdevs_list": [ 00:12:08.681 { 00:12:08.681 "name": "BaseBdev1", 00:12:08.681 "uuid": 
"6ed3425c-03f3-c754-be77-f6884a5208d1", 00:12:08.681 "is_configured": true, 00:12:08.681 "data_offset": 2048, 00:12:08.681 "data_size": 63488 00:12:08.681 }, 00:12:08.681 { 00:12:08.681 "name": "BaseBdev2", 00:12:08.681 "uuid": "602aeed4-bd66-465e-adaa-5bae8055e20d", 00:12:08.681 "is_configured": true, 00:12:08.681 "data_offset": 2048, 00:12:08.681 "data_size": 63488 00:12:08.681 }, 00:12:08.681 { 00:12:08.681 "name": "BaseBdev3", 00:12:08.681 "uuid": "acc1da26-bd1b-e05d-9933-5db7b1d4d2da", 00:12:08.681 "is_configured": true, 00:12:08.681 "data_offset": 2048, 00:12:08.681 "data_size": 63488 00:12:08.681 }, 00:12:08.681 { 00:12:08.681 "name": "BaseBdev4", 00:12:08.681 "uuid": "7b091e4f-a671-4b59-91ae-85257476c40f", 00:12:08.681 "is_configured": true, 00:12:08.681 "data_offset": 2048, 00:12:08.681 "data_size": 63488 00:12:08.681 } 00:12:08.681 ] 00:12:08.681 }' 00:12:08.681 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.681 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.940 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:08.940 [2024-07-25 02:36:55.817293] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.941 [2024-07-25 02:36:55.817321] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.941 [2024-07-25 02:36:55.817599] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.941 [2024-07-25 02:36:55.817607] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.941 [2024-07-25 02:36:55.817615] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.941 [2024-07-25 02:36:55.817619] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32d3a6235900 name raid_bdev1, state offline 00:12:08.941 0 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60165 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60165 ']' 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60165 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60165 00:12:08.941 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:09.200 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:09.200 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:09.200 killing process with pid 60165 00:12:09.200 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60165' 00:12:09.200 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60165 00:12:09.200 [2024-07-25 02:36:55.847611] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.200 02:36:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60165 00:12:09.200 [2024-07-25 
02:36:55.866091] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.5cfeo1SBIT 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.55 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.55 != \0\.\0\0 ]] 00:12:09.200 00:12:09.200 real 0m5.806s 00:12:09.200 user 0m8.835s 00:12:09.200 sys 0m1.014s 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.200 02:36:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.200 ************************************ 00:12:09.200 END TEST raid_write_error_test 00:12:09.200 ************************************ 00:12:09.200 02:36:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:09.200 02:36:56 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:09.200 02:36:56 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:09.200 02:36:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:09.200 02:36:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.200 02:36:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.200 ************************************ 00:12:09.200 START TEST raid_state_function_test 00:12:09.200 ************************************ 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.200 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.201 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:09.460 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60297 00:12:09.461 Process raid pid: 60297 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60297' 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60297 /var/tmp/spdk-raid.sock 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60297 ']' 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:09.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:09.461 02:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.461 [2024-07-25 02:36:56.120643] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
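The state-function test starting here follows a different pattern: a bare bdev_svc app plus RPC calls, with no bdevperf involved. A rough sketch of its first step, using the paths from the trace (the polling loop again stands in for waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock

    # bdev_svc is a bare SPDK app with no bdevs configured; everything is then
    # driven over RPC.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    until [ -S "$SOCK" ]; do sleep 0.1; done

    # Creating the concat array before any of its base bdevs exist is the point
    # of the test: the raid bdev is registered but sits in the "configuring"
    # state until all four bases appear, hence the "doesn't exist now" notices
    # in the trace below.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid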
00:12:09.461 [2024-07-25 02:36:56.120978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:09.720 EAL: TSC is not safe to use in SMP mode 00:12:09.720 EAL: TSC is not invariant 00:12:09.720 [2024-07-25 02:36:56.538465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.978 [2024-07-25 02:36:56.629843] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:09.978 [2024-07-25 02:36:56.631499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.978 [2024-07-25 02:36:56.632101] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.978 [2024-07-25 02:36:56.632107] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.237 02:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.237 02:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:10.237 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:10.497 [2024-07-25 02:36:57.187058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.497 [2024-07-25 02:36:57.187094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.497 [2024-07-25 02:36:57.187097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.497 [2024-07-25 02:36:57.187103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.497 [2024-07-25 02:36:57.187106] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:10.497 [2024-07-25 02:36:57.187111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:10.497 [2024-07-25 02:36:57.187114] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:10.497 [2024-07-25 02:36:57.187118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.497 02:36:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.497 "name": "Existed_Raid", 00:12:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.497 "strip_size_kb": 64, 00:12:10.497 "state": "configuring", 00:12:10.497 "raid_level": "concat", 00:12:10.497 "superblock": false, 00:12:10.497 "num_base_bdevs": 4, 00:12:10.497 "num_base_bdevs_discovered": 0, 00:12:10.497 "num_base_bdevs_operational": 4, 00:12:10.497 "base_bdevs_list": [ 00:12:10.497 { 00:12:10.497 "name": "BaseBdev1", 00:12:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.497 "is_configured": false, 00:12:10.497 "data_offset": 0, 00:12:10.497 "data_size": 0 00:12:10.497 }, 00:12:10.497 { 00:12:10.497 "name": "BaseBdev2", 00:12:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.497 "is_configured": false, 00:12:10.497 "data_offset": 0, 00:12:10.497 "data_size": 0 00:12:10.497 }, 00:12:10.497 { 00:12:10.497 "name": "BaseBdev3", 00:12:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.497 "is_configured": false, 00:12:10.497 "data_offset": 0, 00:12:10.497 "data_size": 0 00:12:10.497 }, 00:12:10.497 { 00:12:10.497 "name": "BaseBdev4", 00:12:10.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.497 "is_configured": false, 00:12:10.497 "data_offset": 0, 00:12:10.497 "data_size": 0 00:12:10.497 } 00:12:10.497 ] 00:12:10.497 }' 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.497 02:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.756 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:11.015 [2024-07-25 02:36:57.803056] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.015 [2024-07-25 02:36:57.803070] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x88e13a34500 name Existed_Raid, state configuring 00:12:11.015 02:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:11.274 [2024-07-25 02:36:57.987067] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:11.274 [2024-07-25 02:36:57.987090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:11.274 [2024-07-25 02:36:57.987093] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.274 [2024-07-25 02:36:57.987099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.274 [2024-07-25 02:36:57.987101] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:11.274 [2024-07-25 02:36:57.987106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.274 [2024-07-25 02:36:57.987109] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
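The verify_raid_bdev_state checks scattered through this trace all follow the same recipe: dump every raid bdev, select the one under test by name, and compare individual fields. The sketch below approximates that helper rather than reproducing it; the expected values are the ones the trace below reports once BaseBdev1 has been created as a malloc bdev (state still "configuring", 1 of 4 base bdevs discovered).

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Same jq filter as in the trace, applied to the Existed_Raid entry.
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Field-by-field comparison against the expected state.
    [[ $(jq -r '.state' <<< "$info") == "configuring" ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]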
00:12:11.274 [2024-07-25 02:36:57.987113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:11.274 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.274 [2024-07-25 02:36:58.167838] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.274 BaseBdev1 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:11.533 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.792 [ 00:12:11.792 { 00:12:11.792 "name": "BaseBdev1", 00:12:11.792 "aliases": [ 00:12:11.792 "c38f3f99-4a2e-11ef-9c8e-7947904e2597" 00:12:11.792 ], 00:12:11.792 "product_name": "Malloc disk", 00:12:11.792 "block_size": 512, 00:12:11.792 "num_blocks": 65536, 00:12:11.792 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:11.792 "assigned_rate_limits": { 00:12:11.792 "rw_ios_per_sec": 0, 00:12:11.792 "rw_mbytes_per_sec": 0, 00:12:11.792 "r_mbytes_per_sec": 0, 00:12:11.792 "w_mbytes_per_sec": 0 00:12:11.792 }, 00:12:11.792 "claimed": true, 00:12:11.792 "claim_type": "exclusive_write", 00:12:11.792 "zoned": false, 00:12:11.792 "supported_io_types": { 00:12:11.792 "read": true, 00:12:11.792 "write": true, 00:12:11.792 "unmap": true, 00:12:11.792 "flush": true, 00:12:11.792 "reset": true, 00:12:11.792 "nvme_admin": false, 00:12:11.792 "nvme_io": false, 00:12:11.792 "nvme_io_md": false, 00:12:11.792 "write_zeroes": true, 00:12:11.792 "zcopy": true, 00:12:11.792 "get_zone_info": false, 00:12:11.792 "zone_management": false, 00:12:11.792 "zone_append": false, 00:12:11.792 "compare": false, 00:12:11.792 "compare_and_write": false, 00:12:11.792 "abort": true, 00:12:11.792 "seek_hole": false, 00:12:11.792 "seek_data": false, 00:12:11.792 "copy": true, 00:12:11.792 "nvme_iov_md": false 00:12:11.792 }, 00:12:11.792 "memory_domains": [ 00:12:11.792 { 00:12:11.792 "dma_device_id": "system", 00:12:11.792 "dma_device_type": 1 00:12:11.792 }, 00:12:11.792 { 00:12:11.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.792 "dma_device_type": 2 00:12:11.792 } 00:12:11.792 ], 00:12:11.792 "driver_specific": {} 00:12:11.792 } 00:12:11.792 ] 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:11.792 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.793 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.793 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.793 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.793 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.793 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.052 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.052 "name": "Existed_Raid", 00:12:12.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.052 "strip_size_kb": 64, 00:12:12.052 "state": "configuring", 00:12:12.052 "raid_level": "concat", 00:12:12.052 "superblock": false, 00:12:12.052 "num_base_bdevs": 4, 00:12:12.052 "num_base_bdevs_discovered": 1, 00:12:12.052 "num_base_bdevs_operational": 4, 00:12:12.052 "base_bdevs_list": [ 00:12:12.052 { 00:12:12.052 "name": "BaseBdev1", 00:12:12.052 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:12.052 "is_configured": true, 00:12:12.052 "data_offset": 0, 00:12:12.052 "data_size": 65536 00:12:12.052 }, 00:12:12.052 { 00:12:12.052 "name": "BaseBdev2", 00:12:12.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.052 "is_configured": false, 00:12:12.052 "data_offset": 0, 00:12:12.052 "data_size": 0 00:12:12.052 }, 00:12:12.052 { 00:12:12.052 "name": "BaseBdev3", 00:12:12.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.052 "is_configured": false, 00:12:12.052 "data_offset": 0, 00:12:12.052 "data_size": 0 00:12:12.052 }, 00:12:12.052 { 00:12:12.052 "name": "BaseBdev4", 00:12:12.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.052 "is_configured": false, 00:12:12.052 "data_offset": 0, 00:12:12.052 "data_size": 0 00:12:12.052 } 00:12:12.052 ] 00:12:12.052 }' 00:12:12.052 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.052 02:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.311 02:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:12.311 [2024-07-25 02:36:59.139097] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.311 [2024-07-25 02:36:59.139114] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x88e13a34500 name Existed_Raid, state configuring 00:12:12.311 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:12:12.571 [2024-07-25 02:36:59.319118] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.571 [2024-07-25 02:36:59.319750] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.571 [2024-07-25 02:36:59.319784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.571 [2024-07-25 02:36:59.319788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.571 [2024-07-25 02:36:59.319793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.571 [2024-07-25 02:36:59.319796] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.571 [2024-07-25 02:36:59.319802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.571 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.831 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.831 "name": "Existed_Raid", 00:12:12.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.831 "strip_size_kb": 64, 00:12:12.831 "state": "configuring", 00:12:12.831 "raid_level": "concat", 00:12:12.831 "superblock": false, 00:12:12.831 "num_base_bdevs": 4, 00:12:12.831 "num_base_bdevs_discovered": 1, 00:12:12.831 "num_base_bdevs_operational": 4, 00:12:12.831 "base_bdevs_list": [ 00:12:12.831 { 00:12:12.831 "name": "BaseBdev1", 00:12:12.831 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:12.831 "is_configured": true, 00:12:12.831 "data_offset": 0, 00:12:12.831 "data_size": 65536 00:12:12.831 }, 00:12:12.831 { 00:12:12.831 "name": "BaseBdev2", 00:12:12.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.831 "is_configured": false, 00:12:12.831 "data_offset": 0, 00:12:12.831 
"data_size": 0 00:12:12.831 }, 00:12:12.831 { 00:12:12.831 "name": "BaseBdev3", 00:12:12.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.831 "is_configured": false, 00:12:12.831 "data_offset": 0, 00:12:12.831 "data_size": 0 00:12:12.831 }, 00:12:12.831 { 00:12:12.831 "name": "BaseBdev4", 00:12:12.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.831 "is_configured": false, 00:12:12.831 "data_offset": 0, 00:12:12.831 "data_size": 0 00:12:12.831 } 00:12:12.831 ] 00:12:12.831 }' 00:12:12.831 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.831 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:13.091 [2024-07-25 02:36:59.955245] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.091 BaseBdev2 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:13.091 02:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:13.351 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.611 [ 00:12:13.611 { 00:12:13.611 "name": "BaseBdev2", 00:12:13.611 "aliases": [ 00:12:13.611 "c4a0163e-4a2e-11ef-9c8e-7947904e2597" 00:12:13.611 ], 00:12:13.611 "product_name": "Malloc disk", 00:12:13.611 "block_size": 512, 00:12:13.611 "num_blocks": 65536, 00:12:13.611 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:13.611 "assigned_rate_limits": { 00:12:13.611 "rw_ios_per_sec": 0, 00:12:13.611 "rw_mbytes_per_sec": 0, 00:12:13.611 "r_mbytes_per_sec": 0, 00:12:13.611 "w_mbytes_per_sec": 0 00:12:13.611 }, 00:12:13.611 "claimed": true, 00:12:13.611 "claim_type": "exclusive_write", 00:12:13.611 "zoned": false, 00:12:13.611 "supported_io_types": { 00:12:13.611 "read": true, 00:12:13.611 "write": true, 00:12:13.611 "unmap": true, 00:12:13.611 "flush": true, 00:12:13.611 "reset": true, 00:12:13.611 "nvme_admin": false, 00:12:13.611 "nvme_io": false, 00:12:13.611 "nvme_io_md": false, 00:12:13.611 "write_zeroes": true, 00:12:13.611 "zcopy": true, 00:12:13.611 "get_zone_info": false, 00:12:13.611 "zone_management": false, 00:12:13.611 "zone_append": false, 00:12:13.611 "compare": false, 00:12:13.611 "compare_and_write": false, 00:12:13.611 "abort": true, 00:12:13.611 "seek_hole": false, 00:12:13.611 "seek_data": false, 00:12:13.611 "copy": true, 00:12:13.611 "nvme_iov_md": false 00:12:13.611 }, 00:12:13.611 "memory_domains": [ 00:12:13.611 { 00:12:13.611 "dma_device_id": "system", 00:12:13.611 "dma_device_type": 
1 00:12:13.611 }, 00:12:13.611 { 00:12:13.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.611 "dma_device_type": 2 00:12:13.611 } 00:12:13.611 ], 00:12:13.611 "driver_specific": {} 00:12:13.611 } 00:12:13.611 ] 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.611 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.871 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.871 "name": "Existed_Raid", 00:12:13.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.871 "strip_size_kb": 64, 00:12:13.871 "state": "configuring", 00:12:13.871 "raid_level": "concat", 00:12:13.871 "superblock": false, 00:12:13.871 "num_base_bdevs": 4, 00:12:13.871 "num_base_bdevs_discovered": 2, 00:12:13.871 "num_base_bdevs_operational": 4, 00:12:13.871 "base_bdevs_list": [ 00:12:13.871 { 00:12:13.871 "name": "BaseBdev1", 00:12:13.871 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:13.871 "is_configured": true, 00:12:13.871 "data_offset": 0, 00:12:13.871 "data_size": 65536 00:12:13.871 }, 00:12:13.871 { 00:12:13.871 "name": "BaseBdev2", 00:12:13.871 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:13.871 "is_configured": true, 00:12:13.871 "data_offset": 0, 00:12:13.871 "data_size": 65536 00:12:13.871 }, 00:12:13.871 { 00:12:13.871 "name": "BaseBdev3", 00:12:13.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.871 "is_configured": false, 00:12:13.871 "data_offset": 0, 00:12:13.871 "data_size": 0 00:12:13.871 }, 00:12:13.871 { 00:12:13.871 "name": "BaseBdev4", 00:12:13.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.871 "is_configured": false, 00:12:13.871 "data_offset": 0, 00:12:13.871 "data_size": 0 00:12:13.871 } 00:12:13.871 ] 00:12:13.871 }' 00:12:13.871 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.871 
02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.131 [2024-07-25 02:37:00.947260] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.131 BaseBdev3 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:14.131 02:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:14.391 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.651 [ 00:12:14.651 { 00:12:14.651 "name": "BaseBdev3", 00:12:14.651 "aliases": [ 00:12:14.651 "c53775b4-4a2e-11ef-9c8e-7947904e2597" 00:12:14.651 ], 00:12:14.651 "product_name": "Malloc disk", 00:12:14.651 "block_size": 512, 00:12:14.651 "num_blocks": 65536, 00:12:14.651 "uuid": "c53775b4-4a2e-11ef-9c8e-7947904e2597", 00:12:14.651 "assigned_rate_limits": { 00:12:14.651 "rw_ios_per_sec": 0, 00:12:14.651 "rw_mbytes_per_sec": 0, 00:12:14.651 "r_mbytes_per_sec": 0, 00:12:14.651 "w_mbytes_per_sec": 0 00:12:14.651 }, 00:12:14.651 "claimed": true, 00:12:14.651 "claim_type": "exclusive_write", 00:12:14.651 "zoned": false, 00:12:14.651 "supported_io_types": { 00:12:14.651 "read": true, 00:12:14.651 "write": true, 00:12:14.651 "unmap": true, 00:12:14.651 "flush": true, 00:12:14.651 "reset": true, 00:12:14.651 "nvme_admin": false, 00:12:14.651 "nvme_io": false, 00:12:14.651 "nvme_io_md": false, 00:12:14.651 "write_zeroes": true, 00:12:14.651 "zcopy": true, 00:12:14.651 "get_zone_info": false, 00:12:14.651 "zone_management": false, 00:12:14.651 "zone_append": false, 00:12:14.651 "compare": false, 00:12:14.651 "compare_and_write": false, 00:12:14.651 "abort": true, 00:12:14.651 "seek_hole": false, 00:12:14.651 "seek_data": false, 00:12:14.651 "copy": true, 00:12:14.651 "nvme_iov_md": false 00:12:14.651 }, 00:12:14.651 "memory_domains": [ 00:12:14.651 { 00:12:14.651 "dma_device_id": "system", 00:12:14.651 "dma_device_type": 1 00:12:14.651 }, 00:12:14.651 { 00:12:14.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.651 "dma_device_type": 2 00:12:14.651 } 00:12:14.651 ], 00:12:14.651 "driver_specific": {} 00:12:14.651 } 00:12:14.651 ] 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:14.651 "name": "Existed_Raid", 00:12:14.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.651 "strip_size_kb": 64, 00:12:14.651 "state": "configuring", 00:12:14.651 "raid_level": "concat", 00:12:14.651 "superblock": false, 00:12:14.651 "num_base_bdevs": 4, 00:12:14.651 "num_base_bdevs_discovered": 3, 00:12:14.651 "num_base_bdevs_operational": 4, 00:12:14.651 "base_bdevs_list": [ 00:12:14.651 { 00:12:14.651 "name": "BaseBdev1", 00:12:14.651 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:14.651 "is_configured": true, 00:12:14.651 "data_offset": 0, 00:12:14.651 "data_size": 65536 00:12:14.651 }, 00:12:14.651 { 00:12:14.651 "name": "BaseBdev2", 00:12:14.651 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:14.651 "is_configured": true, 00:12:14.651 "data_offset": 0, 00:12:14.651 "data_size": 65536 00:12:14.651 }, 00:12:14.651 { 00:12:14.651 "name": "BaseBdev3", 00:12:14.651 "uuid": "c53775b4-4a2e-11ef-9c8e-7947904e2597", 00:12:14.651 "is_configured": true, 00:12:14.651 "data_offset": 0, 00:12:14.651 "data_size": 65536 00:12:14.651 }, 00:12:14.651 { 00:12:14.651 "name": "BaseBdev4", 00:12:14.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.651 "is_configured": false, 00:12:14.651 "data_offset": 0, 00:12:14.651 "data_size": 0 00:12:14.651 } 00:12:14.651 ] 00:12:14.651 }' 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:14.651 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.913 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.173 [2024-07-25 02:37:01.939284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.173 [2024-07-25 02:37:01.939301] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x88e13a34a00 00:12:15.173 [2024-07-25 02:37:01.939304] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:15.173 [2024-07-25 02:37:01.939342] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x88e13a97e20 00:12:15.173 [2024-07-25 02:37:01.939411] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x88e13a34a00 00:12:15.173 [2024-07-25 02:37:01.939414] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x88e13a34a00 00:12:15.173 [2024-07-25 02:37:01.939436] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.173 BaseBdev4 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:15.173 02:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:15.433 02:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.433 [ 00:12:15.433 { 00:12:15.433 "name": "BaseBdev4", 00:12:15.433 "aliases": [ 00:12:15.433 "c5ced4ec-4a2e-11ef-9c8e-7947904e2597" 00:12:15.434 ], 00:12:15.434 "product_name": "Malloc disk", 00:12:15.434 "block_size": 512, 00:12:15.434 "num_blocks": 65536, 00:12:15.434 "uuid": "c5ced4ec-4a2e-11ef-9c8e-7947904e2597", 00:12:15.434 "assigned_rate_limits": { 00:12:15.434 "rw_ios_per_sec": 0, 00:12:15.434 "rw_mbytes_per_sec": 0, 00:12:15.434 "r_mbytes_per_sec": 0, 00:12:15.434 "w_mbytes_per_sec": 0 00:12:15.434 }, 00:12:15.434 "claimed": true, 00:12:15.434 "claim_type": "exclusive_write", 00:12:15.434 "zoned": false, 00:12:15.434 "supported_io_types": { 00:12:15.434 "read": true, 00:12:15.434 "write": true, 00:12:15.434 "unmap": true, 00:12:15.434 "flush": true, 00:12:15.434 "reset": true, 00:12:15.434 "nvme_admin": false, 00:12:15.434 "nvme_io": false, 00:12:15.434 "nvme_io_md": false, 00:12:15.434 "write_zeroes": true, 00:12:15.434 "zcopy": true, 00:12:15.434 "get_zone_info": false, 00:12:15.434 "zone_management": false, 00:12:15.434 "zone_append": false, 00:12:15.434 "compare": false, 00:12:15.434 "compare_and_write": false, 00:12:15.434 "abort": true, 00:12:15.434 "seek_hole": false, 00:12:15.434 "seek_data": false, 00:12:15.434 "copy": true, 00:12:15.434 "nvme_iov_md": false 00:12:15.434 }, 00:12:15.434 "memory_domains": [ 00:12:15.434 { 00:12:15.434 "dma_device_id": "system", 00:12:15.434 "dma_device_type": 1 00:12:15.434 }, 00:12:15.434 { 00:12:15.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.434 "dma_device_type": 2 00:12:15.434 } 00:12:15.434 ], 00:12:15.434 "driver_specific": {} 00:12:15.434 } 00:12:15.434 ] 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
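[editor's note] The waitforbdev calls traced above reduce to three RPCs: create the malloc bdev, let examine callbacks finish, then query bdev_get_bdevs for the new bdev with the test's 2000 timeout. A simplified sketch of that sequence follows, under the same socket/path assumptions as the rest of this log; it is an illustration, not the autotest_common.sh implementation.

# Simplified sketch of the create-and-wait sequence traced above; not the
# waitforbdev helper from autotest_common.sh itself.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_malloc_create 32 512 -b BaseBdev4   # 32 MB volume, 512-byte blocks -> 65536 blocks, as in the JSON dumps above
$rpc bdev_wait_for_examine                    # wait for examine callbacks to complete
$rpc bdev_get_bdevs -b BaseBdev4 -t 2000      # query the bdev, using the 2000 timeout the test passes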
00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.434 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.694 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:15.694 "name": "Existed_Raid", 00:12:15.694 "uuid": "c5ced866-4a2e-11ef-9c8e-7947904e2597", 00:12:15.694 "strip_size_kb": 64, 00:12:15.694 "state": "online", 00:12:15.694 "raid_level": "concat", 00:12:15.694 "superblock": false, 00:12:15.694 "num_base_bdevs": 4, 00:12:15.694 "num_base_bdevs_discovered": 4, 00:12:15.694 "num_base_bdevs_operational": 4, 00:12:15.694 "base_bdevs_list": [ 00:12:15.694 { 00:12:15.694 "name": "BaseBdev1", 00:12:15.694 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:15.694 "is_configured": true, 00:12:15.694 "data_offset": 0, 00:12:15.694 "data_size": 65536 00:12:15.694 }, 00:12:15.694 { 00:12:15.694 "name": "BaseBdev2", 00:12:15.694 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:15.694 "is_configured": true, 00:12:15.694 "data_offset": 0, 00:12:15.694 "data_size": 65536 00:12:15.694 }, 00:12:15.694 { 00:12:15.694 "name": "BaseBdev3", 00:12:15.694 "uuid": "c53775b4-4a2e-11ef-9c8e-7947904e2597", 00:12:15.694 "is_configured": true, 00:12:15.694 "data_offset": 0, 00:12:15.694 "data_size": 65536 00:12:15.694 }, 00:12:15.694 { 00:12:15.694 "name": "BaseBdev4", 00:12:15.694 "uuid": "c5ced4ec-4a2e-11ef-9c8e-7947904e2597", 00:12:15.694 "is_configured": true, 00:12:15.694 "data_offset": 0, 00:12:15.694 "data_size": 65536 00:12:15.694 } 00:12:15.694 ] 00:12:15.694 }' 00:12:15.694 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.694 02:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:15.953 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:16.213 [2024-07-25 02:37:02.919281] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.213 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:16.213 "name": "Existed_Raid", 00:12:16.213 "aliases": [ 00:12:16.213 "c5ced866-4a2e-11ef-9c8e-7947904e2597" 00:12:16.213 ], 00:12:16.213 "product_name": "Raid Volume", 00:12:16.213 "block_size": 512, 00:12:16.213 "num_blocks": 262144, 00:12:16.213 "uuid": "c5ced866-4a2e-11ef-9c8e-7947904e2597", 00:12:16.213 "assigned_rate_limits": { 00:12:16.213 "rw_ios_per_sec": 0, 00:12:16.213 "rw_mbytes_per_sec": 0, 00:12:16.213 "r_mbytes_per_sec": 0, 00:12:16.213 "w_mbytes_per_sec": 0 00:12:16.213 }, 00:12:16.213 "claimed": false, 00:12:16.213 "zoned": false, 00:12:16.213 "supported_io_types": { 00:12:16.213 "read": true, 00:12:16.213 "write": true, 00:12:16.213 "unmap": true, 00:12:16.213 "flush": true, 00:12:16.213 "reset": true, 00:12:16.213 "nvme_admin": false, 00:12:16.213 "nvme_io": false, 00:12:16.213 "nvme_io_md": false, 00:12:16.213 "write_zeroes": true, 00:12:16.213 "zcopy": false, 00:12:16.213 "get_zone_info": false, 00:12:16.213 "zone_management": false, 00:12:16.213 "zone_append": false, 00:12:16.213 "compare": false, 00:12:16.213 "compare_and_write": false, 00:12:16.213 "abort": false, 00:12:16.213 "seek_hole": false, 00:12:16.213 "seek_data": false, 00:12:16.213 "copy": false, 00:12:16.213 "nvme_iov_md": false 00:12:16.213 }, 00:12:16.213 "memory_domains": [ 00:12:16.213 { 00:12:16.213 "dma_device_id": "system", 00:12:16.213 "dma_device_type": 1 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.213 "dma_device_type": 2 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "system", 00:12:16.213 "dma_device_type": 1 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.213 "dma_device_type": 2 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "system", 00:12:16.213 "dma_device_type": 1 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.213 "dma_device_type": 2 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "system", 00:12:16.213 "dma_device_type": 1 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.213 "dma_device_type": 2 00:12:16.213 } 00:12:16.213 ], 00:12:16.213 "driver_specific": { 00:12:16.213 "raid": { 00:12:16.213 "uuid": "c5ced866-4a2e-11ef-9c8e-7947904e2597", 00:12:16.213 "strip_size_kb": 64, 00:12:16.213 "state": "online", 00:12:16.213 "raid_level": "concat", 00:12:16.213 "superblock": false, 00:12:16.213 "num_base_bdevs": 4, 00:12:16.213 "num_base_bdevs_discovered": 4, 00:12:16.213 "num_base_bdevs_operational": 4, 00:12:16.213 "base_bdevs_list": [ 00:12:16.213 { 00:12:16.213 "name": "BaseBdev1", 00:12:16.213 "uuid": 
"c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:16.213 "is_configured": true, 00:12:16.213 "data_offset": 0, 00:12:16.213 "data_size": 65536 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "name": "BaseBdev2", 00:12:16.213 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:16.213 "is_configured": true, 00:12:16.213 "data_offset": 0, 00:12:16.213 "data_size": 65536 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "name": "BaseBdev3", 00:12:16.213 "uuid": "c53775b4-4a2e-11ef-9c8e-7947904e2597", 00:12:16.213 "is_configured": true, 00:12:16.213 "data_offset": 0, 00:12:16.213 "data_size": 65536 00:12:16.213 }, 00:12:16.213 { 00:12:16.213 "name": "BaseBdev4", 00:12:16.213 "uuid": "c5ced4ec-4a2e-11ef-9c8e-7947904e2597", 00:12:16.213 "is_configured": true, 00:12:16.213 "data_offset": 0, 00:12:16.213 "data_size": 65536 00:12:16.213 } 00:12:16.213 ] 00:12:16.213 } 00:12:16.213 } 00:12:16.213 }' 00:12:16.213 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.213 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:16.213 BaseBdev2 00:12:16.213 BaseBdev3 00:12:16.213 BaseBdev4' 00:12:16.213 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.213 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.213 02:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.473 "name": "BaseBdev1", 00:12:16.473 "aliases": [ 00:12:16.473 "c38f3f99-4a2e-11ef-9c8e-7947904e2597" 00:12:16.473 ], 00:12:16.473 "product_name": "Malloc disk", 00:12:16.473 "block_size": 512, 00:12:16.473 "num_blocks": 65536, 00:12:16.473 "uuid": "c38f3f99-4a2e-11ef-9c8e-7947904e2597", 00:12:16.473 "assigned_rate_limits": { 00:12:16.473 "rw_ios_per_sec": 0, 00:12:16.473 "rw_mbytes_per_sec": 0, 00:12:16.473 "r_mbytes_per_sec": 0, 00:12:16.473 "w_mbytes_per_sec": 0 00:12:16.473 }, 00:12:16.473 "claimed": true, 00:12:16.473 "claim_type": "exclusive_write", 00:12:16.473 "zoned": false, 00:12:16.473 "supported_io_types": { 00:12:16.473 "read": true, 00:12:16.473 "write": true, 00:12:16.473 "unmap": true, 00:12:16.473 "flush": true, 00:12:16.473 "reset": true, 00:12:16.473 "nvme_admin": false, 00:12:16.473 "nvme_io": false, 00:12:16.473 "nvme_io_md": false, 00:12:16.473 "write_zeroes": true, 00:12:16.473 "zcopy": true, 00:12:16.473 "get_zone_info": false, 00:12:16.473 "zone_management": false, 00:12:16.473 "zone_append": false, 00:12:16.473 "compare": false, 00:12:16.473 "compare_and_write": false, 00:12:16.473 "abort": true, 00:12:16.473 "seek_hole": false, 00:12:16.473 "seek_data": false, 00:12:16.473 "copy": true, 00:12:16.473 "nvme_iov_md": false 00:12:16.473 }, 00:12:16.473 "memory_domains": [ 00:12:16.473 { 00:12:16.473 "dma_device_id": "system", 00:12:16.473 "dma_device_type": 1 00:12:16.473 }, 00:12:16.473 { 00:12:16.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.473 "dma_device_type": 2 00:12:16.473 } 00:12:16.473 ], 00:12:16.473 "driver_specific": {} 00:12:16.473 }' 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:16.473 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.732 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.732 "name": "BaseBdev2", 00:12:16.732 "aliases": [ 00:12:16.732 "c4a0163e-4a2e-11ef-9c8e-7947904e2597" 00:12:16.732 ], 00:12:16.732 "product_name": "Malloc disk", 00:12:16.732 "block_size": 512, 00:12:16.732 "num_blocks": 65536, 00:12:16.732 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:16.732 "assigned_rate_limits": { 00:12:16.732 "rw_ios_per_sec": 0, 00:12:16.732 "rw_mbytes_per_sec": 0, 00:12:16.732 "r_mbytes_per_sec": 0, 00:12:16.732 "w_mbytes_per_sec": 0 00:12:16.732 }, 00:12:16.732 "claimed": true, 00:12:16.732 "claim_type": "exclusive_write", 00:12:16.732 "zoned": false, 00:12:16.732 "supported_io_types": { 00:12:16.732 "read": true, 00:12:16.732 "write": true, 00:12:16.732 "unmap": true, 00:12:16.732 "flush": true, 00:12:16.732 "reset": true, 00:12:16.732 "nvme_admin": false, 00:12:16.732 "nvme_io": false, 00:12:16.732 "nvme_io_md": false, 00:12:16.732 "write_zeroes": true, 00:12:16.732 "zcopy": true, 00:12:16.732 "get_zone_info": false, 00:12:16.732 "zone_management": false, 00:12:16.732 "zone_append": false, 00:12:16.732 "compare": false, 00:12:16.732 "compare_and_write": false, 00:12:16.732 "abort": true, 00:12:16.732 "seek_hole": false, 00:12:16.732 "seek_data": false, 00:12:16.732 "copy": true, 00:12:16.732 "nvme_iov_md": false 00:12:16.732 }, 00:12:16.733 "memory_domains": [ 00:12:16.733 { 00:12:16.733 "dma_device_id": "system", 00:12:16.733 "dma_device_type": 1 00:12:16.733 }, 00:12:16.733 { 00:12:16.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.733 "dma_device_type": 2 00:12:16.733 } 00:12:16.733 ], 00:12:16.733 "driver_specific": {} 00:12:16.733 }' 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:16.733 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.991 "name": "BaseBdev3", 00:12:16.991 "aliases": [ 00:12:16.991 "c53775b4-4a2e-11ef-9c8e-7947904e2597" 00:12:16.991 ], 00:12:16.991 "product_name": "Malloc disk", 00:12:16.991 "block_size": 512, 00:12:16.991 "num_blocks": 65536, 00:12:16.991 "uuid": "c53775b4-4a2e-11ef-9c8e-7947904e2597", 00:12:16.991 "assigned_rate_limits": { 00:12:16.991 "rw_ios_per_sec": 0, 00:12:16.991 "rw_mbytes_per_sec": 0, 00:12:16.991 "r_mbytes_per_sec": 0, 00:12:16.991 "w_mbytes_per_sec": 0 00:12:16.991 }, 00:12:16.991 "claimed": true, 00:12:16.991 "claim_type": "exclusive_write", 00:12:16.991 "zoned": false, 00:12:16.991 "supported_io_types": { 00:12:16.991 "read": true, 00:12:16.991 "write": true, 00:12:16.991 "unmap": true, 00:12:16.991 "flush": true, 00:12:16.991 "reset": true, 00:12:16.991 "nvme_admin": false, 00:12:16.991 "nvme_io": false, 00:12:16.991 "nvme_io_md": false, 00:12:16.991 "write_zeroes": true, 00:12:16.991 "zcopy": true, 00:12:16.991 "get_zone_info": false, 00:12:16.991 "zone_management": false, 00:12:16.991 "zone_append": false, 00:12:16.991 "compare": false, 00:12:16.991 "compare_and_write": false, 00:12:16.991 "abort": true, 00:12:16.991 "seek_hole": false, 00:12:16.991 "seek_data": false, 00:12:16.991 "copy": true, 00:12:16.991 "nvme_iov_md": false 00:12:16.991 }, 00:12:16.991 "memory_domains": [ 00:12:16.991 { 00:12:16.991 "dma_device_id": "system", 00:12:16.991 "dma_device_type": 1 00:12:16.991 }, 00:12:16.991 { 00:12:16.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.991 "dma_device_type": 2 00:12:16.991 } 00:12:16.991 ], 00:12:16.991 "driver_specific": {} 00:12:16.991 }' 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:16.991 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.251 "name": "BaseBdev4", 00:12:17.251 "aliases": [ 00:12:17.251 "c5ced4ec-4a2e-11ef-9c8e-7947904e2597" 00:12:17.251 ], 00:12:17.251 "product_name": "Malloc disk", 00:12:17.251 "block_size": 512, 00:12:17.251 "num_blocks": 65536, 00:12:17.251 "uuid": "c5ced4ec-4a2e-11ef-9c8e-7947904e2597", 00:12:17.251 "assigned_rate_limits": { 00:12:17.251 "rw_ios_per_sec": 0, 00:12:17.251 "rw_mbytes_per_sec": 0, 00:12:17.251 "r_mbytes_per_sec": 0, 00:12:17.251 "w_mbytes_per_sec": 0 00:12:17.251 }, 00:12:17.251 "claimed": true, 00:12:17.251 "claim_type": "exclusive_write", 00:12:17.251 "zoned": false, 00:12:17.251 "supported_io_types": { 00:12:17.251 "read": true, 00:12:17.251 "write": true, 00:12:17.251 "unmap": true, 00:12:17.251 "flush": true, 00:12:17.251 "reset": true, 00:12:17.251 "nvme_admin": false, 00:12:17.251 "nvme_io": false, 00:12:17.251 "nvme_io_md": false, 00:12:17.251 "write_zeroes": true, 00:12:17.251 "zcopy": true, 00:12:17.251 "get_zone_info": false, 00:12:17.251 "zone_management": false, 00:12:17.251 "zone_append": false, 00:12:17.251 "compare": false, 00:12:17.251 "compare_and_write": false, 00:12:17.251 "abort": true, 00:12:17.251 "seek_hole": false, 00:12:17.251 "seek_data": false, 00:12:17.251 "copy": true, 00:12:17.251 "nvme_iov_md": false 00:12:17.251 }, 00:12:17.251 "memory_domains": [ 00:12:17.251 { 00:12:17.251 "dma_device_id": "system", 00:12:17.251 "dma_device_type": 1 00:12:17.251 }, 00:12:17.251 { 00:12:17.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.251 "dma_device_type": 2 00:12:17.251 } 00:12:17.251 ], 00:12:17.251 "driver_specific": {} 00:12:17.251 }' 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.251 02:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.251 02:37:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.251 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.251 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.251 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.251 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.251 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:17.510 [2024-07-25 02:37:04.211332] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.510 [2024-07-25 02:37:04.211347] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.510 [2024-07-25 02:37:04.211356] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.510 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:17.510 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.511 "name": "Existed_Raid", 00:12:17.511 "uuid": "c5ced866-4a2e-11ef-9c8e-7947904e2597", 00:12:17.511 "strip_size_kb": 64, 00:12:17.511 "state": "offline", 00:12:17.511 "raid_level": "concat", 00:12:17.511 "superblock": false, 00:12:17.511 "num_base_bdevs": 4, 00:12:17.511 "num_base_bdevs_discovered": 3, 00:12:17.511 "num_base_bdevs_operational": 3, 00:12:17.511 "base_bdevs_list": [ 00:12:17.511 { 00:12:17.511 
"name": null, 00:12:17.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.511 "is_configured": false, 00:12:17.511 "data_offset": 0, 00:12:17.511 "data_size": 65536 00:12:17.511 }, 00:12:17.511 { 00:12:17.511 "name": "BaseBdev2", 00:12:17.511 "uuid": "c4a0163e-4a2e-11ef-9c8e-7947904e2597", 00:12:17.511 "is_configured": true, 00:12:17.511 "data_offset": 0, 00:12:17.511 "data_size": 65536 00:12:17.511 }, 00:12:17.511 { 00:12:17.511 "name": "BaseBdev3", 00:12:17.511 "uuid": "c53775b4-4a2e-11ef-9c8e-7947904e2597", 00:12:17.511 "is_configured": true, 00:12:17.511 "data_offset": 0, 00:12:17.511 "data_size": 65536 00:12:17.511 }, 00:12:17.511 { 00:12:17.511 "name": "BaseBdev4", 00:12:17.511 "uuid": "c5ced4ec-4a2e-11ef-9c8e-7947904e2597", 00:12:17.511 "is_configured": true, 00:12:17.511 "data_offset": 0, 00:12:17.511 "data_size": 65536 00:12:17.511 } 00:12:17.511 ] 00:12:17.511 }' 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.511 02:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.079 02:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:18.339 [2024-07-25 02:37:05.032038] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.339 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:18.599 [2024-07-25 02:37:05.400702] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.599 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.599 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.599 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:18.599 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:18.858 [2024-07-25 02:37:05.737387] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:18.858 [2024-07-25 02:37:05.737401] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x88e13a34a00 name Existed_Raid, state offline 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.858 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.118 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:19.118 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:19.118 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:12:19.118 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:19.118 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:19.118 02:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.378 BaseBdev2 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.378 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.637 [ 00:12:19.637 { 00:12:19.637 "name": "BaseBdev2", 00:12:19.637 "aliases": [ 00:12:19.637 "c8465f67-4a2e-11ef-9c8e-7947904e2597" 00:12:19.637 ], 00:12:19.637 "product_name": "Malloc disk", 00:12:19.637 "block_size": 512, 00:12:19.637 "num_blocks": 65536, 00:12:19.637 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:19.637 "assigned_rate_limits": { 00:12:19.637 "rw_ios_per_sec": 0, 00:12:19.637 "rw_mbytes_per_sec": 0, 00:12:19.637 
"r_mbytes_per_sec": 0, 00:12:19.637 "w_mbytes_per_sec": 0 00:12:19.637 }, 00:12:19.637 "claimed": false, 00:12:19.637 "zoned": false, 00:12:19.637 "supported_io_types": { 00:12:19.637 "read": true, 00:12:19.637 "write": true, 00:12:19.637 "unmap": true, 00:12:19.637 "flush": true, 00:12:19.637 "reset": true, 00:12:19.637 "nvme_admin": false, 00:12:19.637 "nvme_io": false, 00:12:19.637 "nvme_io_md": false, 00:12:19.637 "write_zeroes": true, 00:12:19.637 "zcopy": true, 00:12:19.637 "get_zone_info": false, 00:12:19.637 "zone_management": false, 00:12:19.637 "zone_append": false, 00:12:19.637 "compare": false, 00:12:19.637 "compare_and_write": false, 00:12:19.637 "abort": true, 00:12:19.637 "seek_hole": false, 00:12:19.637 "seek_data": false, 00:12:19.637 "copy": true, 00:12:19.638 "nvme_iov_md": false 00:12:19.638 }, 00:12:19.638 "memory_domains": [ 00:12:19.638 { 00:12:19.638 "dma_device_id": "system", 00:12:19.638 "dma_device_type": 1 00:12:19.638 }, 00:12:19.638 { 00:12:19.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.638 "dma_device_type": 2 00:12:19.638 } 00:12:19.638 ], 00:12:19.638 "driver_specific": {} 00:12:19.638 } 00:12:19.638 ] 00:12:19.638 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:19.638 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:19.638 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:19.638 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:19.897 BaseBdev3 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:19.897 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.157 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.158 [ 00:12:20.158 { 00:12:20.158 "name": "BaseBdev3", 00:12:20.158 "aliases": [ 00:12:20.158 "c899622e-4a2e-11ef-9c8e-7947904e2597" 00:12:20.158 ], 00:12:20.158 "product_name": "Malloc disk", 00:12:20.158 "block_size": 512, 00:12:20.158 "num_blocks": 65536, 00:12:20.158 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:20.158 "assigned_rate_limits": { 00:12:20.158 "rw_ios_per_sec": 0, 00:12:20.158 "rw_mbytes_per_sec": 0, 00:12:20.158 "r_mbytes_per_sec": 0, 00:12:20.158 "w_mbytes_per_sec": 0 00:12:20.158 }, 00:12:20.158 "claimed": false, 00:12:20.158 "zoned": false, 00:12:20.158 "supported_io_types": { 00:12:20.158 "read": true, 00:12:20.158 "write": true, 00:12:20.158 "unmap": true, 00:12:20.158 "flush": true, 00:12:20.158 "reset": true, 00:12:20.158 "nvme_admin": false, 
00:12:20.158 "nvme_io": false, 00:12:20.158 "nvme_io_md": false, 00:12:20.158 "write_zeroes": true, 00:12:20.158 "zcopy": true, 00:12:20.158 "get_zone_info": false, 00:12:20.158 "zone_management": false, 00:12:20.158 "zone_append": false, 00:12:20.158 "compare": false, 00:12:20.158 "compare_and_write": false, 00:12:20.158 "abort": true, 00:12:20.158 "seek_hole": false, 00:12:20.158 "seek_data": false, 00:12:20.158 "copy": true, 00:12:20.158 "nvme_iov_md": false 00:12:20.158 }, 00:12:20.158 "memory_domains": [ 00:12:20.158 { 00:12:20.158 "dma_device_id": "system", 00:12:20.158 "dma_device_type": 1 00:12:20.158 }, 00:12:20.158 { 00:12:20.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.158 "dma_device_type": 2 00:12:20.158 } 00:12:20.158 ], 00:12:20.158 "driver_specific": {} 00:12:20.158 } 00:12:20.158 ] 00:12:20.158 02:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:20.158 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:20.158 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:20.158 02:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:20.418 BaseBdev4 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:20.418 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.679 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:20.680 [ 00:12:20.680 { 00:12:20.680 "name": "BaseBdev4", 00:12:20.680 "aliases": [ 00:12:20.680 "c8ec6521-4a2e-11ef-9c8e-7947904e2597" 00:12:20.680 ], 00:12:20.680 "product_name": "Malloc disk", 00:12:20.680 "block_size": 512, 00:12:20.680 "num_blocks": 65536, 00:12:20.680 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:20.680 "assigned_rate_limits": { 00:12:20.680 "rw_ios_per_sec": 0, 00:12:20.680 "rw_mbytes_per_sec": 0, 00:12:20.680 "r_mbytes_per_sec": 0, 00:12:20.680 "w_mbytes_per_sec": 0 00:12:20.680 }, 00:12:20.680 "claimed": false, 00:12:20.680 "zoned": false, 00:12:20.680 "supported_io_types": { 00:12:20.680 "read": true, 00:12:20.680 "write": true, 00:12:20.680 "unmap": true, 00:12:20.680 "flush": true, 00:12:20.680 "reset": true, 00:12:20.680 "nvme_admin": false, 00:12:20.680 "nvme_io": false, 00:12:20.680 "nvme_io_md": false, 00:12:20.680 "write_zeroes": true, 00:12:20.680 "zcopy": true, 00:12:20.680 "get_zone_info": false, 00:12:20.680 "zone_management": false, 00:12:20.680 "zone_append": false, 00:12:20.680 "compare": false, 00:12:20.680 "compare_and_write": false, 00:12:20.680 "abort": true, 
00:12:20.680 "seek_hole": false, 00:12:20.680 "seek_data": false, 00:12:20.680 "copy": true, 00:12:20.680 "nvme_iov_md": false 00:12:20.680 }, 00:12:20.680 "memory_domains": [ 00:12:20.680 { 00:12:20.680 "dma_device_id": "system", 00:12:20.680 "dma_device_type": 1 00:12:20.680 }, 00:12:20.680 { 00:12:20.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.680 "dma_device_type": 2 00:12:20.680 } 00:12:20.680 ], 00:12:20.680 "driver_specific": {} 00:12:20.680 } 00:12:20.680 ] 00:12:20.680 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:20.680 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:20.680 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:20.680 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:20.951 [2024-07-25 02:37:07.702155] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.951 [2024-07-25 02:37:07.702200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.951 [2024-07-25 02:37:07.702206] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.951 [2024-07-25 02:37:07.702607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.951 [2024-07-25 02:37:07.702622] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.951 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.218 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.218 "name": "Existed_Raid", 00:12:21.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.218 "strip_size_kb": 64, 00:12:21.218 "state": "configuring", 00:12:21.218 "raid_level": "concat", 00:12:21.218 "superblock": false, 00:12:21.219 "num_base_bdevs": 4, 00:12:21.219 
"num_base_bdevs_discovered": 3, 00:12:21.219 "num_base_bdevs_operational": 4, 00:12:21.219 "base_bdevs_list": [ 00:12:21.219 { 00:12:21.219 "name": "BaseBdev1", 00:12:21.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.219 "is_configured": false, 00:12:21.219 "data_offset": 0, 00:12:21.219 "data_size": 0 00:12:21.219 }, 00:12:21.219 { 00:12:21.219 "name": "BaseBdev2", 00:12:21.219 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:21.219 "is_configured": true, 00:12:21.219 "data_offset": 0, 00:12:21.219 "data_size": 65536 00:12:21.219 }, 00:12:21.219 { 00:12:21.219 "name": "BaseBdev3", 00:12:21.219 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:21.219 "is_configured": true, 00:12:21.219 "data_offset": 0, 00:12:21.219 "data_size": 65536 00:12:21.219 }, 00:12:21.219 { 00:12:21.219 "name": "BaseBdev4", 00:12:21.219 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:21.219 "is_configured": true, 00:12:21.219 "data_offset": 0, 00:12:21.219 "data_size": 65536 00:12:21.219 } 00:12:21.219 ] 00:12:21.219 }' 00:12:21.219 02:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.219 02:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:21.479 [2024-07-25 02:37:08.342207] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.479 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.738 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.738 "name": "Existed_Raid", 00:12:21.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.738 "strip_size_kb": 64, 00:12:21.738 "state": "configuring", 00:12:21.738 "raid_level": "concat", 00:12:21.738 "superblock": false, 00:12:21.738 "num_base_bdevs": 4, 00:12:21.738 "num_base_bdevs_discovered": 2, 00:12:21.738 "num_base_bdevs_operational": 4, 00:12:21.738 "base_bdevs_list": [ 00:12:21.738 { 00:12:21.738 
"name": "BaseBdev1", 00:12:21.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.738 "is_configured": false, 00:12:21.738 "data_offset": 0, 00:12:21.738 "data_size": 0 00:12:21.738 }, 00:12:21.738 { 00:12:21.738 "name": null, 00:12:21.738 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:21.738 "is_configured": false, 00:12:21.738 "data_offset": 0, 00:12:21.738 "data_size": 65536 00:12:21.738 }, 00:12:21.738 { 00:12:21.738 "name": "BaseBdev3", 00:12:21.738 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:21.738 "is_configured": true, 00:12:21.738 "data_offset": 0, 00:12:21.738 "data_size": 65536 00:12:21.738 }, 00:12:21.738 { 00:12:21.738 "name": "BaseBdev4", 00:12:21.738 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:21.738 "is_configured": true, 00:12:21.738 "data_offset": 0, 00:12:21.738 "data_size": 65536 00:12:21.738 } 00:12:21.738 ] 00:12:21.738 }' 00:12:21.738 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.738 02:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.998 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.998 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.257 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:22.257 02:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.257 [2024-07-25 02:37:09.150338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.257 BaseBdev1 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:22.517 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.777 [ 00:12:22.777 { 00:12:22.777 "name": "BaseBdev1", 00:12:22.777 "aliases": [ 00:12:22.777 "ca1b25dc-4a2e-11ef-9c8e-7947904e2597" 00:12:22.777 ], 00:12:22.777 "product_name": "Malloc disk", 00:12:22.777 "block_size": 512, 00:12:22.777 "num_blocks": 65536, 00:12:22.777 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:22.777 "assigned_rate_limits": { 00:12:22.777 "rw_ios_per_sec": 0, 00:12:22.777 "rw_mbytes_per_sec": 0, 00:12:22.777 "r_mbytes_per_sec": 0, 00:12:22.777 "w_mbytes_per_sec": 0 00:12:22.777 }, 00:12:22.777 "claimed": true, 00:12:22.777 "claim_type": "exclusive_write", 00:12:22.777 "zoned": false, 
00:12:22.777 "supported_io_types": { 00:12:22.777 "read": true, 00:12:22.777 "write": true, 00:12:22.777 "unmap": true, 00:12:22.777 "flush": true, 00:12:22.777 "reset": true, 00:12:22.777 "nvme_admin": false, 00:12:22.777 "nvme_io": false, 00:12:22.777 "nvme_io_md": false, 00:12:22.777 "write_zeroes": true, 00:12:22.777 "zcopy": true, 00:12:22.777 "get_zone_info": false, 00:12:22.777 "zone_management": false, 00:12:22.777 "zone_append": false, 00:12:22.777 "compare": false, 00:12:22.777 "compare_and_write": false, 00:12:22.777 "abort": true, 00:12:22.777 "seek_hole": false, 00:12:22.777 "seek_data": false, 00:12:22.777 "copy": true, 00:12:22.777 "nvme_iov_md": false 00:12:22.777 }, 00:12:22.777 "memory_domains": [ 00:12:22.777 { 00:12:22.777 "dma_device_id": "system", 00:12:22.777 "dma_device_type": 1 00:12:22.777 }, 00:12:22.777 { 00:12:22.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.777 "dma_device_type": 2 00:12:22.777 } 00:12:22.777 ], 00:12:22.777 "driver_specific": {} 00:12:22.777 } 00:12:22.777 ] 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.777 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.037 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.037 "name": "Existed_Raid", 00:12:23.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.037 "strip_size_kb": 64, 00:12:23.037 "state": "configuring", 00:12:23.037 "raid_level": "concat", 00:12:23.037 "superblock": false, 00:12:23.037 "num_base_bdevs": 4, 00:12:23.037 "num_base_bdevs_discovered": 3, 00:12:23.037 "num_base_bdevs_operational": 4, 00:12:23.037 "base_bdevs_list": [ 00:12:23.037 { 00:12:23.037 "name": "BaseBdev1", 00:12:23.037 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:23.037 "is_configured": true, 00:12:23.037 "data_offset": 0, 00:12:23.037 "data_size": 65536 00:12:23.037 }, 00:12:23.037 { 00:12:23.037 "name": null, 00:12:23.037 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:23.037 "is_configured": false, 00:12:23.037 "data_offset": 0, 00:12:23.037 "data_size": 65536 00:12:23.037 
}, 00:12:23.037 { 00:12:23.037 "name": "BaseBdev3", 00:12:23.037 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:23.037 "is_configured": true, 00:12:23.037 "data_offset": 0, 00:12:23.037 "data_size": 65536 00:12:23.037 }, 00:12:23.037 { 00:12:23.037 "name": "BaseBdev4", 00:12:23.037 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:23.037 "is_configured": true, 00:12:23.037 "data_offset": 0, 00:12:23.038 "data_size": 65536 00:12:23.038 } 00:12:23.038 ] 00:12:23.038 }' 00:12:23.038 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.038 02:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.297 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.297 02:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:23.297 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:23.297 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:23.557 [2024-07-25 02:37:10.314303] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.557 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.558 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.558 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.558 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.558 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.818 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.818 "name": "Existed_Raid", 00:12:23.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.818 "strip_size_kb": 64, 00:12:23.818 "state": "configuring", 00:12:23.818 "raid_level": "concat", 00:12:23.818 "superblock": false, 00:12:23.818 "num_base_bdevs": 4, 00:12:23.818 "num_base_bdevs_discovered": 2, 00:12:23.818 "num_base_bdevs_operational": 4, 00:12:23.818 "base_bdevs_list": [ 00:12:23.818 { 00:12:23.818 "name": "BaseBdev1", 00:12:23.818 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:23.818 "is_configured": true, 00:12:23.818 
"data_offset": 0, 00:12:23.818 "data_size": 65536 00:12:23.818 }, 00:12:23.818 { 00:12:23.818 "name": null, 00:12:23.818 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:23.818 "is_configured": false, 00:12:23.818 "data_offset": 0, 00:12:23.818 "data_size": 65536 00:12:23.818 }, 00:12:23.818 { 00:12:23.818 "name": null, 00:12:23.818 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:23.818 "is_configured": false, 00:12:23.818 "data_offset": 0, 00:12:23.818 "data_size": 65536 00:12:23.818 }, 00:12:23.818 { 00:12:23.818 "name": "BaseBdev4", 00:12:23.818 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:23.818 "is_configured": true, 00:12:23.818 "data_offset": 0, 00:12:23.818 "data_size": 65536 00:12:23.818 } 00:12:23.818 ] 00:12:23.818 }' 00:12:23.818 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.818 02:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.078 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.078 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:24.078 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:24.078 02:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:24.338 [2024-07-25 02:37:11.118349] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.338 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.598 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.598 "name": "Existed_Raid", 00:12:24.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.598 "strip_size_kb": 64, 00:12:24.598 "state": "configuring", 00:12:24.598 "raid_level": "concat", 00:12:24.598 "superblock": false, 00:12:24.598 
"num_base_bdevs": 4, 00:12:24.598 "num_base_bdevs_discovered": 3, 00:12:24.598 "num_base_bdevs_operational": 4, 00:12:24.598 "base_bdevs_list": [ 00:12:24.598 { 00:12:24.598 "name": "BaseBdev1", 00:12:24.598 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:24.598 "is_configured": true, 00:12:24.598 "data_offset": 0, 00:12:24.598 "data_size": 65536 00:12:24.598 }, 00:12:24.598 { 00:12:24.598 "name": null, 00:12:24.598 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:24.598 "is_configured": false, 00:12:24.598 "data_offset": 0, 00:12:24.598 "data_size": 65536 00:12:24.598 }, 00:12:24.598 { 00:12:24.598 "name": "BaseBdev3", 00:12:24.598 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:24.598 "is_configured": true, 00:12:24.598 "data_offset": 0, 00:12:24.598 "data_size": 65536 00:12:24.598 }, 00:12:24.598 { 00:12:24.598 "name": "BaseBdev4", 00:12:24.598 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:24.598 "is_configured": true, 00:12:24.598 "data_offset": 0, 00:12:24.598 "data_size": 65536 00:12:24.598 } 00:12:24.598 ] 00:12:24.598 }' 00:12:24.598 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.598 02:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.858 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.858 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:25.118 [2024-07-25 02:37:11.942392] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.118 02:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.378 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:12:25.378 "name": "Existed_Raid", 00:12:25.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.378 "strip_size_kb": 64, 00:12:25.378 "state": "configuring", 00:12:25.378 "raid_level": "concat", 00:12:25.378 "superblock": false, 00:12:25.378 "num_base_bdevs": 4, 00:12:25.378 "num_base_bdevs_discovered": 2, 00:12:25.378 "num_base_bdevs_operational": 4, 00:12:25.378 "base_bdevs_list": [ 00:12:25.378 { 00:12:25.378 "name": null, 00:12:25.378 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:25.378 "is_configured": false, 00:12:25.378 "data_offset": 0, 00:12:25.378 "data_size": 65536 00:12:25.378 }, 00:12:25.378 { 00:12:25.378 "name": null, 00:12:25.378 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:25.378 "is_configured": false, 00:12:25.378 "data_offset": 0, 00:12:25.378 "data_size": 65536 00:12:25.378 }, 00:12:25.378 { 00:12:25.378 "name": "BaseBdev3", 00:12:25.378 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:25.378 "is_configured": true, 00:12:25.378 "data_offset": 0, 00:12:25.378 "data_size": 65536 00:12:25.378 }, 00:12:25.378 { 00:12:25.378 "name": "BaseBdev4", 00:12:25.378 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:25.378 "is_configured": true, 00:12:25.378 "data_offset": 0, 00:12:25.378 "data_size": 65536 00:12:25.378 } 00:12:25.378 ] 00:12:25.378 }' 00:12:25.378 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:25.378 02:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.638 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.638 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:25.897 [2024-07-25 02:37:12.767213] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.897 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.157 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.157 "name": "Existed_Raid", 00:12:26.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.157 "strip_size_kb": 64, 00:12:26.157 "state": "configuring", 00:12:26.157 "raid_level": "concat", 00:12:26.157 "superblock": false, 00:12:26.157 "num_base_bdevs": 4, 00:12:26.157 "num_base_bdevs_discovered": 3, 00:12:26.157 "num_base_bdevs_operational": 4, 00:12:26.157 "base_bdevs_list": [ 00:12:26.157 { 00:12:26.157 "name": null, 00:12:26.157 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:26.157 "is_configured": false, 00:12:26.157 "data_offset": 0, 00:12:26.157 "data_size": 65536 00:12:26.157 }, 00:12:26.157 { 00:12:26.157 "name": "BaseBdev2", 00:12:26.157 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:26.157 "is_configured": true, 00:12:26.157 "data_offset": 0, 00:12:26.157 "data_size": 65536 00:12:26.157 }, 00:12:26.157 { 00:12:26.157 "name": "BaseBdev3", 00:12:26.157 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:26.157 "is_configured": true, 00:12:26.157 "data_offset": 0, 00:12:26.157 "data_size": 65536 00:12:26.157 }, 00:12:26.157 { 00:12:26.157 "name": "BaseBdev4", 00:12:26.157 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:26.157 "is_configured": true, 00:12:26.157 "data_offset": 0, 00:12:26.157 "data_size": 65536 00:12:26.157 } 00:12:26.157 ] 00:12:26.157 }' 00:12:26.157 02:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.157 02:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.417 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.417 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:26.675 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:26.675 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.675 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ca1b25dc-4a2e-11ef-9c8e-7947904e2597 00:12:26.933 [2024-07-25 02:37:13.735392] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:26.933 [2024-07-25 02:37:13.735407] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x88e13a34f00 00:12:26.933 [2024-07-25 02:37:13.735410] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:26.933 [2024-07-25 02:37:13.735426] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x88e13a97e20 00:12:26.933 [2024-07-25 02:37:13.735473] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x88e13a34f00 00:12:26.933 [2024-07-25 02:37:13.735476] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x88e13a34f00 00:12:26.933 [2024-07-25 02:37:13.735498] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.933 NewBaseBdev 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.933 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:27.192 02:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:27.451 [ 00:12:27.451 { 00:12:27.451 "name": "NewBaseBdev", 00:12:27.451 "aliases": [ 00:12:27.451 "ca1b25dc-4a2e-11ef-9c8e-7947904e2597" 00:12:27.451 ], 00:12:27.451 "product_name": "Malloc disk", 00:12:27.451 "block_size": 512, 00:12:27.451 "num_blocks": 65536, 00:12:27.451 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:27.451 "assigned_rate_limits": { 00:12:27.451 "rw_ios_per_sec": 0, 00:12:27.451 "rw_mbytes_per_sec": 0, 00:12:27.451 "r_mbytes_per_sec": 0, 00:12:27.451 "w_mbytes_per_sec": 0 00:12:27.451 }, 00:12:27.451 "claimed": true, 00:12:27.451 "claim_type": "exclusive_write", 00:12:27.451 "zoned": false, 00:12:27.451 "supported_io_types": { 00:12:27.451 "read": true, 00:12:27.451 "write": true, 00:12:27.451 "unmap": true, 00:12:27.451 "flush": true, 00:12:27.451 "reset": true, 00:12:27.451 "nvme_admin": false, 00:12:27.451 "nvme_io": false, 00:12:27.451 "nvme_io_md": false, 00:12:27.451 "write_zeroes": true, 00:12:27.451 "zcopy": true, 00:12:27.451 "get_zone_info": false, 00:12:27.451 "zone_management": false, 00:12:27.451 "zone_append": false, 00:12:27.451 "compare": false, 00:12:27.451 "compare_and_write": false, 00:12:27.451 "abort": true, 00:12:27.451 "seek_hole": false, 00:12:27.451 "seek_data": false, 00:12:27.451 "copy": true, 00:12:27.451 "nvme_iov_md": false 00:12:27.451 }, 00:12:27.451 "memory_domains": [ 00:12:27.451 { 00:12:27.451 "dma_device_id": "system", 00:12:27.451 "dma_device_type": 1 00:12:27.451 }, 00:12:27.451 { 00:12:27.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.451 "dma_device_type": 2 00:12:27.451 } 00:12:27.451 ], 00:12:27.451 "driver_specific": {} 00:12:27.451 } 00:12:27.451 ] 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:27.451 02:37:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.451 "name": "Existed_Raid", 00:12:27.451 "uuid": "ccd6c99e-4a2e-11ef-9c8e-7947904e2597", 00:12:27.451 "strip_size_kb": 64, 00:12:27.451 "state": "online", 00:12:27.451 "raid_level": "concat", 00:12:27.451 "superblock": false, 00:12:27.451 "num_base_bdevs": 4, 00:12:27.451 "num_base_bdevs_discovered": 4, 00:12:27.451 "num_base_bdevs_operational": 4, 00:12:27.451 "base_bdevs_list": [ 00:12:27.451 { 00:12:27.451 "name": "NewBaseBdev", 00:12:27.451 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:27.451 "is_configured": true, 00:12:27.451 "data_offset": 0, 00:12:27.451 "data_size": 65536 00:12:27.451 }, 00:12:27.451 { 00:12:27.451 "name": "BaseBdev2", 00:12:27.451 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:27.451 "is_configured": true, 00:12:27.451 "data_offset": 0, 00:12:27.451 "data_size": 65536 00:12:27.451 }, 00:12:27.451 { 00:12:27.451 "name": "BaseBdev3", 00:12:27.451 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:27.451 "is_configured": true, 00:12:27.451 "data_offset": 0, 00:12:27.451 "data_size": 65536 00:12:27.451 }, 00:12:27.451 { 00:12:27.451 "name": "BaseBdev4", 00:12:27.451 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:27.451 "is_configured": true, 00:12:27.451 "data_offset": 0, 00:12:27.451 "data_size": 65536 00:12:27.451 } 00:12:27.451 ] 00:12:27.451 }' 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.451 02:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:27.709 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:12:27.969 [2024-07-25 02:37:14.723404] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.969 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:27.969 "name": "Existed_Raid", 00:12:27.969 "aliases": [ 00:12:27.969 "ccd6c99e-4a2e-11ef-9c8e-7947904e2597" 00:12:27.969 ], 00:12:27.969 "product_name": "Raid Volume", 00:12:27.969 "block_size": 512, 00:12:27.969 "num_blocks": 262144, 00:12:27.969 "uuid": "ccd6c99e-4a2e-11ef-9c8e-7947904e2597", 00:12:27.969 "assigned_rate_limits": { 00:12:27.969 "rw_ios_per_sec": 0, 00:12:27.969 "rw_mbytes_per_sec": 0, 00:12:27.969 "r_mbytes_per_sec": 0, 00:12:27.969 "w_mbytes_per_sec": 0 00:12:27.969 }, 00:12:27.969 "claimed": false, 00:12:27.969 "zoned": false, 00:12:27.969 "supported_io_types": { 00:12:27.969 "read": true, 00:12:27.969 "write": true, 00:12:27.969 "unmap": true, 00:12:27.969 "flush": true, 00:12:27.969 "reset": true, 00:12:27.969 "nvme_admin": false, 00:12:27.969 "nvme_io": false, 00:12:27.969 "nvme_io_md": false, 00:12:27.969 "write_zeroes": true, 00:12:27.969 "zcopy": false, 00:12:27.969 "get_zone_info": false, 00:12:27.969 "zone_management": false, 00:12:27.969 "zone_append": false, 00:12:27.969 "compare": false, 00:12:27.969 "compare_and_write": false, 00:12:27.969 "abort": false, 00:12:27.969 "seek_hole": false, 00:12:27.969 "seek_data": false, 00:12:27.969 "copy": false, 00:12:27.969 "nvme_iov_md": false 00:12:27.969 }, 00:12:27.969 "memory_domains": [ 00:12:27.969 { 00:12:27.969 "dma_device_id": "system", 00:12:27.969 "dma_device_type": 1 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.969 "dma_device_type": 2 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "system", 00:12:27.969 "dma_device_type": 1 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.969 "dma_device_type": 2 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "system", 00:12:27.969 "dma_device_type": 1 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.969 "dma_device_type": 2 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "system", 00:12:27.969 "dma_device_type": 1 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.969 "dma_device_type": 2 00:12:27.969 } 00:12:27.969 ], 00:12:27.969 "driver_specific": { 00:12:27.969 "raid": { 00:12:27.969 "uuid": "ccd6c99e-4a2e-11ef-9c8e-7947904e2597", 00:12:27.969 "strip_size_kb": 64, 00:12:27.969 "state": "online", 00:12:27.969 "raid_level": "concat", 00:12:27.969 "superblock": false, 00:12:27.969 "num_base_bdevs": 4, 00:12:27.969 "num_base_bdevs_discovered": 4, 00:12:27.969 "num_base_bdevs_operational": 4, 00:12:27.969 "base_bdevs_list": [ 00:12:27.969 { 00:12:27.969 "name": "NewBaseBdev", 00:12:27.969 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:27.969 "is_configured": true, 00:12:27.969 "data_offset": 0, 00:12:27.969 "data_size": 65536 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "name": "BaseBdev2", 00:12:27.969 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:27.969 "is_configured": true, 00:12:27.969 "data_offset": 0, 00:12:27.969 "data_size": 65536 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 "name": "BaseBdev3", 00:12:27.969 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:27.969 "is_configured": true, 00:12:27.969 "data_offset": 0, 00:12:27.969 "data_size": 65536 00:12:27.969 }, 00:12:27.969 { 00:12:27.969 
"name": "BaseBdev4", 00:12:27.969 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:27.969 "is_configured": true, 00:12:27.969 "data_offset": 0, 00:12:27.969 "data_size": 65536 00:12:27.969 } 00:12:27.969 ] 00:12:27.969 } 00:12:27.969 } 00:12:27.969 }' 00:12:27.969 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.969 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:27.969 BaseBdev2 00:12:27.969 BaseBdev3 00:12:27.969 BaseBdev4' 00:12:27.969 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:27.969 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:27.969 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.229 "name": "NewBaseBdev", 00:12:28.229 "aliases": [ 00:12:28.229 "ca1b25dc-4a2e-11ef-9c8e-7947904e2597" 00:12:28.229 ], 00:12:28.229 "product_name": "Malloc disk", 00:12:28.229 "block_size": 512, 00:12:28.229 "num_blocks": 65536, 00:12:28.229 "uuid": "ca1b25dc-4a2e-11ef-9c8e-7947904e2597", 00:12:28.229 "assigned_rate_limits": { 00:12:28.229 "rw_ios_per_sec": 0, 00:12:28.229 "rw_mbytes_per_sec": 0, 00:12:28.229 "r_mbytes_per_sec": 0, 00:12:28.229 "w_mbytes_per_sec": 0 00:12:28.229 }, 00:12:28.229 "claimed": true, 00:12:28.229 "claim_type": "exclusive_write", 00:12:28.229 "zoned": false, 00:12:28.229 "supported_io_types": { 00:12:28.229 "read": true, 00:12:28.229 "write": true, 00:12:28.229 "unmap": true, 00:12:28.229 "flush": true, 00:12:28.229 "reset": true, 00:12:28.229 "nvme_admin": false, 00:12:28.229 "nvme_io": false, 00:12:28.229 "nvme_io_md": false, 00:12:28.229 "write_zeroes": true, 00:12:28.229 "zcopy": true, 00:12:28.229 "get_zone_info": false, 00:12:28.229 "zone_management": false, 00:12:28.229 "zone_append": false, 00:12:28.229 "compare": false, 00:12:28.229 "compare_and_write": false, 00:12:28.229 "abort": true, 00:12:28.229 "seek_hole": false, 00:12:28.229 "seek_data": false, 00:12:28.229 "copy": true, 00:12:28.229 "nvme_iov_md": false 00:12:28.229 }, 00:12:28.229 "memory_domains": [ 00:12:28.229 { 00:12:28.229 "dma_device_id": "system", 00:12:28.229 "dma_device_type": 1 00:12:28.229 }, 00:12:28.229 { 00:12:28.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.229 "dma_device_type": 2 00:12:28.229 } 00:12:28.229 ], 00:12:28.229 "driver_specific": {} 00:12:28.229 }' 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.229 02:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:28.229 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.488 "name": "BaseBdev2", 00:12:28.488 "aliases": [ 00:12:28.488 "c8465f67-4a2e-11ef-9c8e-7947904e2597" 00:12:28.488 ], 00:12:28.488 "product_name": "Malloc disk", 00:12:28.488 "block_size": 512, 00:12:28.488 "num_blocks": 65536, 00:12:28.488 "uuid": "c8465f67-4a2e-11ef-9c8e-7947904e2597", 00:12:28.488 "assigned_rate_limits": { 00:12:28.488 "rw_ios_per_sec": 0, 00:12:28.488 "rw_mbytes_per_sec": 0, 00:12:28.488 "r_mbytes_per_sec": 0, 00:12:28.488 "w_mbytes_per_sec": 0 00:12:28.488 }, 00:12:28.488 "claimed": true, 00:12:28.488 "claim_type": "exclusive_write", 00:12:28.488 "zoned": false, 00:12:28.488 "supported_io_types": { 00:12:28.488 "read": true, 00:12:28.488 "write": true, 00:12:28.488 "unmap": true, 00:12:28.488 "flush": true, 00:12:28.488 "reset": true, 00:12:28.488 "nvme_admin": false, 00:12:28.488 "nvme_io": false, 00:12:28.488 "nvme_io_md": false, 00:12:28.488 "write_zeroes": true, 00:12:28.488 "zcopy": true, 00:12:28.488 "get_zone_info": false, 00:12:28.488 "zone_management": false, 00:12:28.488 "zone_append": false, 00:12:28.488 "compare": false, 00:12:28.488 "compare_and_write": false, 00:12:28.488 "abort": true, 00:12:28.488 "seek_hole": false, 00:12:28.488 "seek_data": false, 00:12:28.488 "copy": true, 00:12:28.488 "nvme_iov_md": false 00:12:28.488 }, 00:12:28.488 "memory_domains": [ 00:12:28.488 { 00:12:28.488 "dma_device_id": "system", 00:12:28.488 "dma_device_type": 1 00:12:28.488 }, 00:12:28.488 { 00:12:28.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.488 "dma_device_type": 2 00:12:28.488 } 00:12:28.488 ], 00:12:28.488 "driver_specific": {} 00:12:28.488 }' 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.488 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.489 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.489 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.489 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:28.489 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.748 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.748 "name": "BaseBdev3", 00:12:28.748 "aliases": [ 00:12:28.748 "c899622e-4a2e-11ef-9c8e-7947904e2597" 00:12:28.748 ], 00:12:28.748 "product_name": "Malloc disk", 00:12:28.748 "block_size": 512, 00:12:28.748 "num_blocks": 65536, 00:12:28.748 "uuid": "c899622e-4a2e-11ef-9c8e-7947904e2597", 00:12:28.748 "assigned_rate_limits": { 00:12:28.748 "rw_ios_per_sec": 0, 00:12:28.748 "rw_mbytes_per_sec": 0, 00:12:28.748 "r_mbytes_per_sec": 0, 00:12:28.748 "w_mbytes_per_sec": 0 00:12:28.749 }, 00:12:28.749 "claimed": true, 00:12:28.749 "claim_type": "exclusive_write", 00:12:28.749 "zoned": false, 00:12:28.749 "supported_io_types": { 00:12:28.749 "read": true, 00:12:28.749 "write": true, 00:12:28.749 "unmap": true, 00:12:28.749 "flush": true, 00:12:28.749 "reset": true, 00:12:28.749 "nvme_admin": false, 00:12:28.749 "nvme_io": false, 00:12:28.749 "nvme_io_md": false, 00:12:28.749 "write_zeroes": true, 00:12:28.749 "zcopy": true, 00:12:28.749 "get_zone_info": false, 00:12:28.749 "zone_management": false, 00:12:28.749 "zone_append": false, 00:12:28.749 "compare": false, 00:12:28.749 "compare_and_write": false, 00:12:28.749 "abort": true, 00:12:28.749 "seek_hole": false, 00:12:28.749 "seek_data": false, 00:12:28.749 "copy": true, 00:12:28.749 "nvme_iov_md": false 00:12:28.749 }, 00:12:28.749 "memory_domains": [ 00:12:28.749 { 00:12:28.749 "dma_device_id": "system", 00:12:28.749 "dma_device_type": 1 00:12:28.749 }, 00:12:28.749 { 00:12:28.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.749 "dma_device_type": 2 00:12:28.749 } 00:12:28.749 ], 00:12:28.749 "driver_specific": {} 00:12:28.749 }' 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:28.749 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:29.009 "name": "BaseBdev4", 00:12:29.009 "aliases": [ 00:12:29.009 "c8ec6521-4a2e-11ef-9c8e-7947904e2597" 00:12:29.009 ], 00:12:29.009 "product_name": "Malloc disk", 00:12:29.009 "block_size": 512, 00:12:29.009 "num_blocks": 65536, 00:12:29.009 "uuid": "c8ec6521-4a2e-11ef-9c8e-7947904e2597", 00:12:29.009 "assigned_rate_limits": { 00:12:29.009 "rw_ios_per_sec": 0, 00:12:29.009 "rw_mbytes_per_sec": 0, 00:12:29.009 "r_mbytes_per_sec": 0, 00:12:29.009 "w_mbytes_per_sec": 0 00:12:29.009 }, 00:12:29.009 "claimed": true, 00:12:29.009 "claim_type": "exclusive_write", 00:12:29.009 "zoned": false, 00:12:29.009 "supported_io_types": { 00:12:29.009 "read": true, 00:12:29.009 "write": true, 00:12:29.009 "unmap": true, 00:12:29.009 "flush": true, 00:12:29.009 "reset": true, 00:12:29.009 "nvme_admin": false, 00:12:29.009 "nvme_io": false, 00:12:29.009 "nvme_io_md": false, 00:12:29.009 "write_zeroes": true, 00:12:29.009 "zcopy": true, 00:12:29.009 "get_zone_info": false, 00:12:29.009 "zone_management": false, 00:12:29.009 "zone_append": false, 00:12:29.009 "compare": false, 00:12:29.009 "compare_and_write": false, 00:12:29.009 "abort": true, 00:12:29.009 "seek_hole": false, 00:12:29.009 "seek_data": false, 00:12:29.009 "copy": true, 00:12:29.009 "nvme_iov_md": false 00:12:29.009 }, 00:12:29.009 "memory_domains": [ 00:12:29.009 { 00:12:29.009 "dma_device_id": "system", 00:12:29.009 "dma_device_type": 1 00:12:29.009 }, 00:12:29.009 { 00:12:29.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.009 "dma_device_type": 2 00:12:29.009 } 00:12:29.009 ], 00:12:29.009 "driver_specific": {} 00:12:29.009 }' 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:29.009 02:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:29.269 [2024-07-25 02:37:16.011452] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.269 [2024-07-25 02:37:16.011468] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.269 [2024-07-25 02:37:16.011481] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.269 [2024-07-25 02:37:16.011491] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.269 [2024-07-25 02:37:16.011494] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x88e13a34f00 name Existed_Raid, state offline 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60297 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60297 ']' 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60297 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60297 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:29.269 killing process with pid 60297 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60297' 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60297 00:12:29.269 [2024-07-25 02:37:16.040042] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.269 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60297 00:12:29.269 [2024-07-25 02:37:16.058690] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:29.529 00:12:29.529 real 0m20.124s 00:12:29.529 user 0m36.090s 00:12:29.529 sys 0m3.510s 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.529 ************************************ 00:12:29.529 END TEST raid_state_function_test 00:12:29.529 ************************************ 00:12:29.529 02:37:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:29.529 02:37:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:29.529 02:37:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:29.529 02:37:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.529 02:37:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.529 ************************************ 00:12:29.529 START TEST raid_state_function_test_sb 00:12:29.529 ************************************ 00:12:29.529 02:37:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:29.529 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61088 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61088' 00:12:29.530 Process raid pid: 61088 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61088 /var/tmp/spdk-raid.sock 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61088 ']' 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.530 02:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.530 [2024-07-25 02:37:16.312954] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:12:29.530 [2024-07-25 02:37:16.313193] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:30.105 EAL: TSC is not safe to use in SMP mode 00:12:30.105 EAL: TSC is not invariant 00:12:30.105 [2024-07-25 02:37:16.732727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.105 [2024-07-25 02:37:16.825320] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
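Everything in the trace that follows is driven over the RPC socket opened by the bdev_svc instance launched above. As a rough map of what the xtrace output corresponds to, the same concat-with-superblock flow could be reproduced by hand roughly as follows (a minimal sketch only; the RPC shorthand variable is introduced here and is not part of the test script):

# Assumes the bdev_svc started above is listening on /var/tmp/spdk-raid.sock.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create the raid first: with none of its base bdevs present it stays "configuring".
# -r concat selects the RAID level, -z 64 the 64 KiB strip size, -s asks for superblocks.
$RPC bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Each 32 MiB, 512-byte-block malloc bdev is claimed by the raid as soon as it appears.
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Inspect the assembled array; once all four base bdevs are claimed it reports "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The verify_raid_bdev_state checks below assert exactly that progression: "configuring" with zero discovered base bdevs right after creation, one more discovered bdev per BaseBdevN added, and finally "online" once all four are claimed.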
00:12:30.105 [2024-07-25 02:37:16.827072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.105 [2024-07-25 02:37:16.827658] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.105 [2024-07-25 02:37:16.827669] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.364 02:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.364 02:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:30.364 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:30.623 [2024-07-25 02:37:17.318603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.623 [2024-07-25 02:37:17.318645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.623 [2024-07-25 02:37:17.318649] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.623 [2024-07-25 02:37:17.318655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.623 [2024-07-25 02:37:17.318657] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.623 [2024-07-25 02:37:17.318663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.623 [2024-07-25 02:37:17.318665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.623 [2024-07-25 02:37:17.318686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.623 "name": "Existed_Raid", 00:12:30.623 "uuid": 
"cef989bf-4a2e-11ef-9c8e-7947904e2597", 00:12:30.623 "strip_size_kb": 64, 00:12:30.623 "state": "configuring", 00:12:30.623 "raid_level": "concat", 00:12:30.623 "superblock": true, 00:12:30.623 "num_base_bdevs": 4, 00:12:30.623 "num_base_bdevs_discovered": 0, 00:12:30.623 "num_base_bdevs_operational": 4, 00:12:30.623 "base_bdevs_list": [ 00:12:30.623 { 00:12:30.623 "name": "BaseBdev1", 00:12:30.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.623 "is_configured": false, 00:12:30.623 "data_offset": 0, 00:12:30.623 "data_size": 0 00:12:30.623 }, 00:12:30.623 { 00:12:30.623 "name": "BaseBdev2", 00:12:30.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.623 "is_configured": false, 00:12:30.623 "data_offset": 0, 00:12:30.623 "data_size": 0 00:12:30.623 }, 00:12:30.623 { 00:12:30.623 "name": "BaseBdev3", 00:12:30.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.623 "is_configured": false, 00:12:30.623 "data_offset": 0, 00:12:30.623 "data_size": 0 00:12:30.623 }, 00:12:30.623 { 00:12:30.623 "name": "BaseBdev4", 00:12:30.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.623 "is_configured": false, 00:12:30.623 "data_offset": 0, 00:12:30.623 "data_size": 0 00:12:30.623 } 00:12:30.623 ] 00:12:30.623 }' 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.623 02:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.882 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:31.142 [2024-07-25 02:37:17.946610] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.142 [2024-07-25 02:37:17.946624] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b1f16a34500 name Existed_Raid, state configuring 00:12:31.142 02:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:31.402 [2024-07-25 02:37:18.130627] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.402 [2024-07-25 02:37:18.130656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.402 [2024-07-25 02:37:18.130659] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.402 [2024-07-25 02:37:18.130664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.402 [2024-07-25 02:37:18.130666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.402 [2024-07-25 02:37:18.130671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.402 [2024-07-25 02:37:18.130673] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.402 [2024-07-25 02:37:18.130678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:31.402 [2024-07-25 02:37:18.287380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:12:31.402 BaseBdev1 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:31.402 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:31.662 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.922 [ 00:12:31.922 { 00:12:31.922 "name": "BaseBdev1", 00:12:31.922 "aliases": [ 00:12:31.922 "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597" 00:12:31.922 ], 00:12:31.922 "product_name": "Malloc disk", 00:12:31.922 "block_size": 512, 00:12:31.922 "num_blocks": 65536, 00:12:31.922 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:31.922 "assigned_rate_limits": { 00:12:31.922 "rw_ios_per_sec": 0, 00:12:31.922 "rw_mbytes_per_sec": 0, 00:12:31.922 "r_mbytes_per_sec": 0, 00:12:31.922 "w_mbytes_per_sec": 0 00:12:31.922 }, 00:12:31.922 "claimed": true, 00:12:31.922 "claim_type": "exclusive_write", 00:12:31.922 "zoned": false, 00:12:31.922 "supported_io_types": { 00:12:31.922 "read": true, 00:12:31.922 "write": true, 00:12:31.922 "unmap": true, 00:12:31.922 "flush": true, 00:12:31.922 "reset": true, 00:12:31.922 "nvme_admin": false, 00:12:31.922 "nvme_io": false, 00:12:31.922 "nvme_io_md": false, 00:12:31.922 "write_zeroes": true, 00:12:31.922 "zcopy": true, 00:12:31.922 "get_zone_info": false, 00:12:31.922 "zone_management": false, 00:12:31.922 "zone_append": false, 00:12:31.922 "compare": false, 00:12:31.922 "compare_and_write": false, 00:12:31.922 "abort": true, 00:12:31.922 "seek_hole": false, 00:12:31.922 "seek_data": false, 00:12:31.922 "copy": true, 00:12:31.922 "nvme_iov_md": false 00:12:31.922 }, 00:12:31.922 "memory_domains": [ 00:12:31.922 { 00:12:31.922 "dma_device_id": "system", 00:12:31.922 "dma_device_type": 1 00:12:31.922 }, 00:12:31.922 { 00:12:31.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.922 "dma_device_type": 2 00:12:31.922 } 00:12:31.922 ], 00:12:31.922 "driver_specific": {} 00:12:31.922 } 00:12:31.922 ] 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:31.922 02:37:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.922 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.182 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.182 "name": "Existed_Raid", 00:12:32.182 "uuid": "cf757194-4a2e-11ef-9c8e-7947904e2597", 00:12:32.182 "strip_size_kb": 64, 00:12:32.182 "state": "configuring", 00:12:32.182 "raid_level": "concat", 00:12:32.182 "superblock": true, 00:12:32.182 "num_base_bdevs": 4, 00:12:32.182 "num_base_bdevs_discovered": 1, 00:12:32.182 "num_base_bdevs_operational": 4, 00:12:32.182 "base_bdevs_list": [ 00:12:32.182 { 00:12:32.182 "name": "BaseBdev1", 00:12:32.182 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:32.182 "is_configured": true, 00:12:32.182 "data_offset": 2048, 00:12:32.182 "data_size": 63488 00:12:32.182 }, 00:12:32.182 { 00:12:32.182 "name": "BaseBdev2", 00:12:32.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.182 "is_configured": false, 00:12:32.182 "data_offset": 0, 00:12:32.182 "data_size": 0 00:12:32.182 }, 00:12:32.182 { 00:12:32.182 "name": "BaseBdev3", 00:12:32.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.182 "is_configured": false, 00:12:32.182 "data_offset": 0, 00:12:32.182 "data_size": 0 00:12:32.182 }, 00:12:32.182 { 00:12:32.182 "name": "BaseBdev4", 00:12:32.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.182 "is_configured": false, 00:12:32.182 "data_offset": 0, 00:12:32.182 "data_size": 0 00:12:32.182 } 00:12:32.182 ] 00:12:32.182 }' 00:12:32.182 02:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.182 02:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.442 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:32.442 [2024-07-25 02:37:19.278684] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:32.442 [2024-07-25 02:37:19.278701] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b1f16a34500 name Existed_Raid, state configuring 00:12:32.442 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:32.702 [2024-07-25 02:37:19.458707] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.702 [2024-07-25 02:37:19.459300] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:32.702 [2024-07-25 02:37:19.459333] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:32.702 [2024-07-25 02:37:19.459337] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:32.702 [2024-07-25 02:37:19.459342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:32.702 [2024-07-25 02:37:19.459345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:32.702 [2024-07-25 02:37:19.459350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.702 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.962 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.962 "name": "Existed_Raid", 00:12:32.962 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:32.962 "strip_size_kb": 64, 00:12:32.962 "state": "configuring", 00:12:32.962 "raid_level": "concat", 00:12:32.962 "superblock": true, 00:12:32.962 "num_base_bdevs": 4, 00:12:32.962 "num_base_bdevs_discovered": 1, 00:12:32.962 "num_base_bdevs_operational": 4, 00:12:32.962 "base_bdevs_list": [ 00:12:32.962 { 00:12:32.962 "name": "BaseBdev1", 00:12:32.962 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:32.962 "is_configured": true, 00:12:32.962 "data_offset": 2048, 00:12:32.962 "data_size": 63488 00:12:32.962 }, 00:12:32.962 { 00:12:32.962 "name": "BaseBdev2", 00:12:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.962 "is_configured": false, 00:12:32.962 "data_offset": 0, 00:12:32.962 "data_size": 0 00:12:32.962 }, 00:12:32.962 { 00:12:32.962 "name": "BaseBdev3", 00:12:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.962 "is_configured": false, 00:12:32.962 "data_offset": 0, 00:12:32.962 "data_size": 0 00:12:32.962 }, 00:12:32.962 { 00:12:32.962 "name": "BaseBdev4", 
00:12:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.962 "is_configured": false, 00:12:32.962 "data_offset": 0, 00:12:32.962 "data_size": 0 00:12:32.962 } 00:12:32.962 ] 00:12:32.962 }' 00:12:32.962 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.962 02:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.222 02:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:33.222 [2024-07-25 02:37:20.090841] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.222 BaseBdev2 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:33.222 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:33.482 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:33.742 [ 00:12:33.742 { 00:12:33.742 "name": "BaseBdev2", 00:12:33.742 "aliases": [ 00:12:33.742 "d0a088c8-4a2e-11ef-9c8e-7947904e2597" 00:12:33.742 ], 00:12:33.742 "product_name": "Malloc disk", 00:12:33.742 "block_size": 512, 00:12:33.742 "num_blocks": 65536, 00:12:33.742 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:33.742 "assigned_rate_limits": { 00:12:33.742 "rw_ios_per_sec": 0, 00:12:33.742 "rw_mbytes_per_sec": 0, 00:12:33.742 "r_mbytes_per_sec": 0, 00:12:33.742 "w_mbytes_per_sec": 0 00:12:33.742 }, 00:12:33.742 "claimed": true, 00:12:33.742 "claim_type": "exclusive_write", 00:12:33.742 "zoned": false, 00:12:33.742 "supported_io_types": { 00:12:33.742 "read": true, 00:12:33.742 "write": true, 00:12:33.742 "unmap": true, 00:12:33.742 "flush": true, 00:12:33.742 "reset": true, 00:12:33.742 "nvme_admin": false, 00:12:33.742 "nvme_io": false, 00:12:33.742 "nvme_io_md": false, 00:12:33.742 "write_zeroes": true, 00:12:33.742 "zcopy": true, 00:12:33.742 "get_zone_info": false, 00:12:33.742 "zone_management": false, 00:12:33.742 "zone_append": false, 00:12:33.742 "compare": false, 00:12:33.742 "compare_and_write": false, 00:12:33.742 "abort": true, 00:12:33.742 "seek_hole": false, 00:12:33.742 "seek_data": false, 00:12:33.742 "copy": true, 00:12:33.742 "nvme_iov_md": false 00:12:33.742 }, 00:12:33.742 "memory_domains": [ 00:12:33.742 { 00:12:33.742 "dma_device_id": "system", 00:12:33.742 "dma_device_type": 1 00:12:33.742 }, 00:12:33.742 { 00:12:33.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.742 "dma_device_type": 2 00:12:33.742 } 00:12:33.742 ], 00:12:33.742 "driver_specific": {} 00:12:33.742 } 00:12:33.742 ] 00:12:33.742 02:37:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.742 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.742 "name": "Existed_Raid", 00:12:33.742 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:33.742 "strip_size_kb": 64, 00:12:33.742 "state": "configuring", 00:12:33.742 "raid_level": "concat", 00:12:33.742 "superblock": true, 00:12:33.742 "num_base_bdevs": 4, 00:12:33.742 "num_base_bdevs_discovered": 2, 00:12:33.742 "num_base_bdevs_operational": 4, 00:12:33.742 "base_bdevs_list": [ 00:12:33.742 { 00:12:33.742 "name": "BaseBdev1", 00:12:33.742 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:33.742 "is_configured": true, 00:12:33.742 "data_offset": 2048, 00:12:33.742 "data_size": 63488 00:12:33.742 }, 00:12:33.742 { 00:12:33.742 "name": "BaseBdev2", 00:12:33.742 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:33.742 "is_configured": true, 00:12:33.742 "data_offset": 2048, 00:12:33.742 "data_size": 63488 00:12:33.742 }, 00:12:33.743 { 00:12:33.743 "name": "BaseBdev3", 00:12:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.743 "is_configured": false, 00:12:33.743 "data_offset": 0, 00:12:33.743 "data_size": 0 00:12:33.743 }, 00:12:33.743 { 00:12:33.743 "name": "BaseBdev4", 00:12:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.743 "is_configured": false, 00:12:33.743 "data_offset": 0, 00:12:33.743 "data_size": 0 00:12:33.743 } 00:12:33.743 ] 00:12:33.743 }' 00:12:33.743 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.743 02:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.003 02:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:34.261 [2024-07-25 02:37:21.042877] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.262 BaseBdev3 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:34.262 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:34.520 [ 00:12:34.520 { 00:12:34.520 "name": "BaseBdev3", 00:12:34.520 "aliases": [ 00:12:34.520 "d131cec6-4a2e-11ef-9c8e-7947904e2597" 00:12:34.520 ], 00:12:34.520 "product_name": "Malloc disk", 00:12:34.520 "block_size": 512, 00:12:34.520 "num_blocks": 65536, 00:12:34.520 "uuid": "d131cec6-4a2e-11ef-9c8e-7947904e2597", 00:12:34.520 "assigned_rate_limits": { 00:12:34.520 "rw_ios_per_sec": 0, 00:12:34.520 "rw_mbytes_per_sec": 0, 00:12:34.520 "r_mbytes_per_sec": 0, 00:12:34.520 "w_mbytes_per_sec": 0 00:12:34.520 }, 00:12:34.520 "claimed": true, 00:12:34.520 "claim_type": "exclusive_write", 00:12:34.520 "zoned": false, 00:12:34.520 "supported_io_types": { 00:12:34.520 "read": true, 00:12:34.520 "write": true, 00:12:34.520 "unmap": true, 00:12:34.520 "flush": true, 00:12:34.520 "reset": true, 00:12:34.520 "nvme_admin": false, 00:12:34.520 "nvme_io": false, 00:12:34.520 "nvme_io_md": false, 00:12:34.520 "write_zeroes": true, 00:12:34.520 "zcopy": true, 00:12:34.520 "get_zone_info": false, 00:12:34.520 "zone_management": false, 00:12:34.520 "zone_append": false, 00:12:34.520 "compare": false, 00:12:34.520 "compare_and_write": false, 00:12:34.520 "abort": true, 00:12:34.520 "seek_hole": false, 00:12:34.520 "seek_data": false, 00:12:34.520 "copy": true, 00:12:34.520 "nvme_iov_md": false 00:12:34.520 }, 00:12:34.520 "memory_domains": [ 00:12:34.520 { 00:12:34.520 "dma_device_id": "system", 00:12:34.520 "dma_device_type": 1 00:12:34.520 }, 00:12:34.520 { 00:12:34.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.520 "dma_device_type": 2 00:12:34.520 } 00:12:34.520 ], 00:12:34.520 "driver_specific": {} 00:12:34.520 } 00:12:34.520 ] 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.520 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.780 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.780 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.780 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.780 "name": "Existed_Raid", 00:12:34.780 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:34.780 "strip_size_kb": 64, 00:12:34.780 "state": "configuring", 00:12:34.780 "raid_level": "concat", 00:12:34.780 "superblock": true, 00:12:34.780 "num_base_bdevs": 4, 00:12:34.780 "num_base_bdevs_discovered": 3, 00:12:34.780 "num_base_bdevs_operational": 4, 00:12:34.780 "base_bdevs_list": [ 00:12:34.780 { 00:12:34.780 "name": "BaseBdev1", 00:12:34.780 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:34.780 "is_configured": true, 00:12:34.780 "data_offset": 2048, 00:12:34.780 "data_size": 63488 00:12:34.780 }, 00:12:34.780 { 00:12:34.780 "name": "BaseBdev2", 00:12:34.780 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:34.780 "is_configured": true, 00:12:34.780 "data_offset": 2048, 00:12:34.780 "data_size": 63488 00:12:34.780 }, 00:12:34.780 { 00:12:34.780 "name": "BaseBdev3", 00:12:34.780 "uuid": "d131cec6-4a2e-11ef-9c8e-7947904e2597", 00:12:34.780 "is_configured": true, 00:12:34.780 "data_offset": 2048, 00:12:34.780 "data_size": 63488 00:12:34.780 }, 00:12:34.780 { 00:12:34.780 "name": "BaseBdev4", 00:12:34.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.780 "is_configured": false, 00:12:34.780 "data_offset": 0, 00:12:34.780 "data_size": 0 00:12:34.780 } 00:12:34.780 ] 00:12:34.780 }' 00:12:34.780 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.780 02:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.040 02:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:35.300 [2024-07-25 02:37:22.022958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.300 [2024-07-25 02:37:22.023003] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b1f16a34a00 00:12:35.300 [2024-07-25 02:37:22.023007] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:35.300 [2024-07-25 
02:37:22.023022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b1f16a97e20 00:12:35.300 [2024-07-25 02:37:22.023056] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b1f16a34a00 00:12:35.300 [2024-07-25 02:37:22.023059] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3b1f16a34a00 00:12:35.300 [2024-07-25 02:37:22.023073] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.300 BaseBdev4 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.300 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.564 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:35.564 [ 00:12:35.564 { 00:12:35.564 "name": "BaseBdev4", 00:12:35.564 "aliases": [ 00:12:35.564 "d1c75b78-4a2e-11ef-9c8e-7947904e2597" 00:12:35.564 ], 00:12:35.564 "product_name": "Malloc disk", 00:12:35.564 "block_size": 512, 00:12:35.564 "num_blocks": 65536, 00:12:35.565 "uuid": "d1c75b78-4a2e-11ef-9c8e-7947904e2597", 00:12:35.565 "assigned_rate_limits": { 00:12:35.565 "rw_ios_per_sec": 0, 00:12:35.565 "rw_mbytes_per_sec": 0, 00:12:35.565 "r_mbytes_per_sec": 0, 00:12:35.565 "w_mbytes_per_sec": 0 00:12:35.565 }, 00:12:35.565 "claimed": true, 00:12:35.565 "claim_type": "exclusive_write", 00:12:35.565 "zoned": false, 00:12:35.565 "supported_io_types": { 00:12:35.565 "read": true, 00:12:35.565 "write": true, 00:12:35.565 "unmap": true, 00:12:35.565 "flush": true, 00:12:35.565 "reset": true, 00:12:35.565 "nvme_admin": false, 00:12:35.565 "nvme_io": false, 00:12:35.565 "nvme_io_md": false, 00:12:35.565 "write_zeroes": true, 00:12:35.565 "zcopy": true, 00:12:35.565 "get_zone_info": false, 00:12:35.565 "zone_management": false, 00:12:35.565 "zone_append": false, 00:12:35.565 "compare": false, 00:12:35.565 "compare_and_write": false, 00:12:35.565 "abort": true, 00:12:35.565 "seek_hole": false, 00:12:35.565 "seek_data": false, 00:12:35.565 "copy": true, 00:12:35.565 "nvme_iov_md": false 00:12:35.565 }, 00:12:35.565 "memory_domains": [ 00:12:35.565 { 00:12:35.565 "dma_device_id": "system", 00:12:35.565 "dma_device_type": 1 00:12:35.565 }, 00:12:35.565 { 00:12:35.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.565 "dma_device_type": 2 00:12:35.565 } 00:12:35.565 ], 00:12:35.565 "driver_specific": {} 00:12:35.565 } 00:12:35.565 ] 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.565 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.847 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.847 "name": "Existed_Raid", 00:12:35.847 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:35.847 "strip_size_kb": 64, 00:12:35.847 "state": "online", 00:12:35.847 "raid_level": "concat", 00:12:35.847 "superblock": true, 00:12:35.847 "num_base_bdevs": 4, 00:12:35.847 "num_base_bdevs_discovered": 4, 00:12:35.847 "num_base_bdevs_operational": 4, 00:12:35.847 "base_bdevs_list": [ 00:12:35.847 { 00:12:35.847 "name": "BaseBdev1", 00:12:35.847 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:35.847 "is_configured": true, 00:12:35.847 "data_offset": 2048, 00:12:35.847 "data_size": 63488 00:12:35.847 }, 00:12:35.847 { 00:12:35.847 "name": "BaseBdev2", 00:12:35.847 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:35.847 "is_configured": true, 00:12:35.847 "data_offset": 2048, 00:12:35.847 "data_size": 63488 00:12:35.847 }, 00:12:35.847 { 00:12:35.847 "name": "BaseBdev3", 00:12:35.847 "uuid": "d131cec6-4a2e-11ef-9c8e-7947904e2597", 00:12:35.847 "is_configured": true, 00:12:35.847 "data_offset": 2048, 00:12:35.847 "data_size": 63488 00:12:35.847 }, 00:12:35.847 { 00:12:35.847 "name": "BaseBdev4", 00:12:35.847 "uuid": "d1c75b78-4a2e-11ef-9c8e-7947904e2597", 00:12:35.847 "is_configured": true, 00:12:35.847 "data_offset": 2048, 00:12:35.847 "data_size": 63488 00:12:35.847 } 00:12:35.847 ] 00:12:35.847 }' 00:12:35.847 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.847 02:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:36.128 02:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:36.128 [2024-07-25 02:37:22.995018] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.128 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:36.128 "name": "Existed_Raid", 00:12:36.128 "aliases": [ 00:12:36.128 "d04017a1-4a2e-11ef-9c8e-7947904e2597" 00:12:36.128 ], 00:12:36.128 "product_name": "Raid Volume", 00:12:36.128 "block_size": 512, 00:12:36.128 "num_blocks": 253952, 00:12:36.128 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:36.128 "assigned_rate_limits": { 00:12:36.128 "rw_ios_per_sec": 0, 00:12:36.128 "rw_mbytes_per_sec": 0, 00:12:36.128 "r_mbytes_per_sec": 0, 00:12:36.128 "w_mbytes_per_sec": 0 00:12:36.128 }, 00:12:36.128 "claimed": false, 00:12:36.128 "zoned": false, 00:12:36.128 "supported_io_types": { 00:12:36.128 "read": true, 00:12:36.128 "write": true, 00:12:36.128 "unmap": true, 00:12:36.128 "flush": true, 00:12:36.128 "reset": true, 00:12:36.128 "nvme_admin": false, 00:12:36.128 "nvme_io": false, 00:12:36.128 "nvme_io_md": false, 00:12:36.128 "write_zeroes": true, 00:12:36.128 "zcopy": false, 00:12:36.128 "get_zone_info": false, 00:12:36.128 "zone_management": false, 00:12:36.128 "zone_append": false, 00:12:36.128 "compare": false, 00:12:36.128 "compare_and_write": false, 00:12:36.128 "abort": false, 00:12:36.128 "seek_hole": false, 00:12:36.128 "seek_data": false, 00:12:36.128 "copy": false, 00:12:36.128 "nvme_iov_md": false 00:12:36.128 }, 00:12:36.128 "memory_domains": [ 00:12:36.128 { 00:12:36.128 "dma_device_id": "system", 00:12:36.128 "dma_device_type": 1 00:12:36.128 }, 00:12:36.128 { 00:12:36.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.128 "dma_device_type": 2 00:12:36.128 }, 00:12:36.128 { 00:12:36.128 "dma_device_id": "system", 00:12:36.128 "dma_device_type": 1 00:12:36.128 }, 00:12:36.128 { 00:12:36.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.128 "dma_device_type": 2 00:12:36.128 }, 00:12:36.128 { 00:12:36.128 "dma_device_id": "system", 00:12:36.128 "dma_device_type": 1 00:12:36.129 }, 00:12:36.129 { 00:12:36.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.129 "dma_device_type": 2 00:12:36.129 }, 00:12:36.129 { 00:12:36.129 "dma_device_id": "system", 00:12:36.129 "dma_device_type": 1 00:12:36.129 }, 00:12:36.129 { 00:12:36.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.129 "dma_device_type": 2 00:12:36.129 } 00:12:36.129 ], 00:12:36.129 "driver_specific": { 00:12:36.129 "raid": { 00:12:36.129 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:36.129 "strip_size_kb": 64, 00:12:36.129 "state": "online", 00:12:36.129 "raid_level": "concat", 00:12:36.129 "superblock": true, 00:12:36.129 "num_base_bdevs": 4, 00:12:36.129 "num_base_bdevs_discovered": 4, 00:12:36.129 "num_base_bdevs_operational": 4, 00:12:36.129 "base_bdevs_list": [ 00:12:36.129 { 00:12:36.129 "name": "BaseBdev1", 00:12:36.129 "uuid": 
"cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:36.129 "is_configured": true, 00:12:36.129 "data_offset": 2048, 00:12:36.129 "data_size": 63488 00:12:36.129 }, 00:12:36.129 { 00:12:36.129 "name": "BaseBdev2", 00:12:36.129 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:36.129 "is_configured": true, 00:12:36.129 "data_offset": 2048, 00:12:36.129 "data_size": 63488 00:12:36.129 }, 00:12:36.129 { 00:12:36.129 "name": "BaseBdev3", 00:12:36.129 "uuid": "d131cec6-4a2e-11ef-9c8e-7947904e2597", 00:12:36.129 "is_configured": true, 00:12:36.129 "data_offset": 2048, 00:12:36.129 "data_size": 63488 00:12:36.129 }, 00:12:36.129 { 00:12:36.129 "name": "BaseBdev4", 00:12:36.129 "uuid": "d1c75b78-4a2e-11ef-9c8e-7947904e2597", 00:12:36.129 "is_configured": true, 00:12:36.129 "data_offset": 2048, 00:12:36.129 "data_size": 63488 00:12:36.129 } 00:12:36.129 ] 00:12:36.129 } 00:12:36.129 } 00:12:36.129 }' 00:12:36.129 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:36.129 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:36.129 BaseBdev2 00:12:36.129 BaseBdev3 00:12:36.129 BaseBdev4' 00:12:36.129 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:36.405 "name": "BaseBdev1", 00:12:36.405 "aliases": [ 00:12:36.405 "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597" 00:12:36.405 ], 00:12:36.405 "product_name": "Malloc disk", 00:12:36.405 "block_size": 512, 00:12:36.405 "num_blocks": 65536, 00:12:36.405 "uuid": "cf8d3fc0-4a2e-11ef-9c8e-7947904e2597", 00:12:36.405 "assigned_rate_limits": { 00:12:36.405 "rw_ios_per_sec": 0, 00:12:36.405 "rw_mbytes_per_sec": 0, 00:12:36.405 "r_mbytes_per_sec": 0, 00:12:36.405 "w_mbytes_per_sec": 0 00:12:36.405 }, 00:12:36.405 "claimed": true, 00:12:36.405 "claim_type": "exclusive_write", 00:12:36.405 "zoned": false, 00:12:36.405 "supported_io_types": { 00:12:36.405 "read": true, 00:12:36.405 "write": true, 00:12:36.405 "unmap": true, 00:12:36.405 "flush": true, 00:12:36.405 "reset": true, 00:12:36.405 "nvme_admin": false, 00:12:36.405 "nvme_io": false, 00:12:36.405 "nvme_io_md": false, 00:12:36.405 "write_zeroes": true, 00:12:36.405 "zcopy": true, 00:12:36.405 "get_zone_info": false, 00:12:36.405 "zone_management": false, 00:12:36.405 "zone_append": false, 00:12:36.405 "compare": false, 00:12:36.405 "compare_and_write": false, 00:12:36.405 "abort": true, 00:12:36.405 "seek_hole": false, 00:12:36.405 "seek_data": false, 00:12:36.405 "copy": true, 00:12:36.405 "nvme_iov_md": false 00:12:36.405 }, 00:12:36.405 "memory_domains": [ 00:12:36.405 { 00:12:36.405 "dma_device_id": "system", 00:12:36.405 "dma_device_type": 1 00:12:36.405 }, 00:12:36.405 { 00:12:36.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.405 "dma_device_type": 2 00:12:36.405 } 00:12:36.405 ], 00:12:36.405 "driver_specific": {} 00:12:36.405 }' 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.405 02:37:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:36.405 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:36.665 "name": "BaseBdev2", 00:12:36.665 "aliases": [ 00:12:36.665 "d0a088c8-4a2e-11ef-9c8e-7947904e2597" 00:12:36.665 ], 00:12:36.665 "product_name": "Malloc disk", 00:12:36.665 "block_size": 512, 00:12:36.665 "num_blocks": 65536, 00:12:36.665 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:36.665 "assigned_rate_limits": { 00:12:36.665 "rw_ios_per_sec": 0, 00:12:36.665 "rw_mbytes_per_sec": 0, 00:12:36.665 "r_mbytes_per_sec": 0, 00:12:36.665 "w_mbytes_per_sec": 0 00:12:36.665 }, 00:12:36.665 "claimed": true, 00:12:36.665 "claim_type": "exclusive_write", 00:12:36.665 "zoned": false, 00:12:36.665 "supported_io_types": { 00:12:36.665 "read": true, 00:12:36.665 "write": true, 00:12:36.665 "unmap": true, 00:12:36.665 "flush": true, 00:12:36.665 "reset": true, 00:12:36.665 "nvme_admin": false, 00:12:36.665 "nvme_io": false, 00:12:36.665 "nvme_io_md": false, 00:12:36.665 "write_zeroes": true, 00:12:36.665 "zcopy": true, 00:12:36.665 "get_zone_info": false, 00:12:36.665 "zone_management": false, 00:12:36.665 "zone_append": false, 00:12:36.665 "compare": false, 00:12:36.665 "compare_and_write": false, 00:12:36.665 "abort": true, 00:12:36.665 "seek_hole": false, 00:12:36.665 "seek_data": false, 00:12:36.665 "copy": true, 00:12:36.665 "nvme_iov_md": false 00:12:36.665 }, 00:12:36.665 "memory_domains": [ 00:12:36.665 { 00:12:36.665 "dma_device_id": "system", 00:12:36.665 "dma_device_type": 1 00:12:36.665 }, 00:12:36.665 { 00:12:36.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.665 "dma_device_type": 2 00:12:36.665 } 00:12:36.665 ], 00:12:36.665 "driver_specific": {} 00:12:36.665 }' 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:36.665 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:36.925 "name": "BaseBdev3", 00:12:36.925 "aliases": [ 00:12:36.925 "d131cec6-4a2e-11ef-9c8e-7947904e2597" 00:12:36.925 ], 00:12:36.925 "product_name": "Malloc disk", 00:12:36.925 "block_size": 512, 00:12:36.925 "num_blocks": 65536, 00:12:36.925 "uuid": "d131cec6-4a2e-11ef-9c8e-7947904e2597", 00:12:36.925 "assigned_rate_limits": { 00:12:36.925 "rw_ios_per_sec": 0, 00:12:36.925 "rw_mbytes_per_sec": 0, 00:12:36.925 "r_mbytes_per_sec": 0, 00:12:36.925 "w_mbytes_per_sec": 0 00:12:36.925 }, 00:12:36.925 "claimed": true, 00:12:36.925 "claim_type": "exclusive_write", 00:12:36.925 "zoned": false, 00:12:36.925 "supported_io_types": { 00:12:36.925 "read": true, 00:12:36.925 "write": true, 00:12:36.925 "unmap": true, 00:12:36.925 "flush": true, 00:12:36.925 "reset": true, 00:12:36.925 "nvme_admin": false, 00:12:36.925 "nvme_io": false, 00:12:36.925 "nvme_io_md": false, 00:12:36.925 "write_zeroes": true, 00:12:36.925 "zcopy": true, 00:12:36.925 "get_zone_info": false, 00:12:36.925 "zone_management": false, 00:12:36.925 "zone_append": false, 00:12:36.925 "compare": false, 00:12:36.925 "compare_and_write": false, 00:12:36.925 "abort": true, 00:12:36.925 "seek_hole": false, 00:12:36.925 "seek_data": false, 00:12:36.925 "copy": true, 00:12:36.925 "nvme_iov_md": false 00:12:36.925 }, 00:12:36.925 "memory_domains": [ 00:12:36.925 { 00:12:36.925 "dma_device_id": "system", 00:12:36.925 "dma_device_type": 1 00:12:36.925 }, 00:12:36.925 { 00:12:36.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.925 "dma_device_type": 2 00:12:36.925 } 00:12:36.925 ], 00:12:36.925 "driver_specific": {} 00:12:36.925 }' 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:36.925 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.185 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.185 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.185 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.185 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:37.185 02:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:37.185 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:37.185 "name": "BaseBdev4", 00:12:37.185 "aliases": [ 00:12:37.185 "d1c75b78-4a2e-11ef-9c8e-7947904e2597" 00:12:37.185 ], 00:12:37.185 "product_name": "Malloc disk", 00:12:37.185 "block_size": 512, 00:12:37.185 "num_blocks": 65536, 00:12:37.185 "uuid": "d1c75b78-4a2e-11ef-9c8e-7947904e2597", 00:12:37.185 "assigned_rate_limits": { 00:12:37.185 "rw_ios_per_sec": 0, 00:12:37.185 "rw_mbytes_per_sec": 0, 00:12:37.185 "r_mbytes_per_sec": 0, 00:12:37.185 "w_mbytes_per_sec": 0 00:12:37.185 }, 00:12:37.185 "claimed": true, 00:12:37.185 "claim_type": "exclusive_write", 00:12:37.185 "zoned": false, 00:12:37.185 "supported_io_types": { 00:12:37.185 "read": true, 00:12:37.185 "write": true, 00:12:37.185 "unmap": true, 00:12:37.185 "flush": true, 00:12:37.185 "reset": true, 00:12:37.185 "nvme_admin": false, 00:12:37.185 "nvme_io": false, 00:12:37.185 "nvme_io_md": false, 00:12:37.185 "write_zeroes": true, 00:12:37.185 "zcopy": true, 00:12:37.185 "get_zone_info": false, 00:12:37.185 "zone_management": false, 00:12:37.185 "zone_append": false, 00:12:37.185 "compare": false, 00:12:37.185 "compare_and_write": false, 00:12:37.185 "abort": true, 00:12:37.185 "seek_hole": false, 00:12:37.185 "seek_data": false, 00:12:37.185 "copy": true, 00:12:37.185 "nvme_iov_md": false 00:12:37.185 }, 00:12:37.185 "memory_domains": [ 00:12:37.185 { 00:12:37.185 "dma_device_id": "system", 00:12:37.185 "dma_device_type": 1 00:12:37.185 }, 00:12:37.185 { 00:12:37.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.185 "dma_device_type": 2 00:12:37.185 } 00:12:37.185 ], 00:12:37.185 "driver_specific": {} 00:12:37.185 }' 00:12:37.185 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.186 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:37.186 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:37.186 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.186 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:37.186 02:37:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:37.186 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.186 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:37.445 [2024-07-25 02:37:24.283178] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.445 [2024-07-25 02:37:24.283193] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.445 [2024-07-25 02:37:24.283203] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.445 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.705 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.705 "name": "Existed_Raid", 00:12:37.705 "uuid": "d04017a1-4a2e-11ef-9c8e-7947904e2597", 00:12:37.705 "strip_size_kb": 64, 
00:12:37.705 "state": "offline", 00:12:37.705 "raid_level": "concat", 00:12:37.705 "superblock": true, 00:12:37.705 "num_base_bdevs": 4, 00:12:37.706 "num_base_bdevs_discovered": 3, 00:12:37.706 "num_base_bdevs_operational": 3, 00:12:37.706 "base_bdevs_list": [ 00:12:37.706 { 00:12:37.706 "name": null, 00:12:37.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.706 "is_configured": false, 00:12:37.706 "data_offset": 2048, 00:12:37.706 "data_size": 63488 00:12:37.706 }, 00:12:37.706 { 00:12:37.706 "name": "BaseBdev2", 00:12:37.706 "uuid": "d0a088c8-4a2e-11ef-9c8e-7947904e2597", 00:12:37.706 "is_configured": true, 00:12:37.706 "data_offset": 2048, 00:12:37.706 "data_size": 63488 00:12:37.706 }, 00:12:37.706 { 00:12:37.706 "name": "BaseBdev3", 00:12:37.706 "uuid": "d131cec6-4a2e-11ef-9c8e-7947904e2597", 00:12:37.706 "is_configured": true, 00:12:37.706 "data_offset": 2048, 00:12:37.706 "data_size": 63488 00:12:37.706 }, 00:12:37.706 { 00:12:37.706 "name": "BaseBdev4", 00:12:37.706 "uuid": "d1c75b78-4a2e-11ef-9c8e-7947904e2597", 00:12:37.706 "is_configured": true, 00:12:37.706 "data_offset": 2048, 00:12:37.706 "data_size": 63488 00:12:37.706 } 00:12:37.706 ] 00:12:37.706 }' 00:12:37.706 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.706 02:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.965 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:37.965 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:37.965 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.965 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:38.225 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:38.225 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.225 02:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:38.225 [2024-07-25 02:37:25.091875] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:38.225 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:38.225 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:38.225 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.225 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:38.484 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:38.484 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.484 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:38.743 [2024-07-25 02:37:25.460549] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:38.743 02:37:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:38.743 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:38.743 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.743 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:38.743 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:38.744 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:38.744 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:39.003 [2024-07-25 02:37:25.797245] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:39.003 [2024-07-25 02:37:25.797263] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b1f16a34a00 name Existed_Raid, state offline 00:12:39.003 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:39.003 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:39.003 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.003 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:39.262 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:39.262 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:39.262 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:12:39.262 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:39.262 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:39.262 02:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.521 BaseBdev2 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:39.521 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.781 [ 
00:12:39.781 { 00:12:39.781 "name": "BaseBdev2", 00:12:39.781 "aliases": [ 00:12:39.781 "d43eef5f-4a2e-11ef-9c8e-7947904e2597" 00:12:39.781 ], 00:12:39.781 "product_name": "Malloc disk", 00:12:39.781 "block_size": 512, 00:12:39.781 "num_blocks": 65536, 00:12:39.781 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:39.781 "assigned_rate_limits": { 00:12:39.781 "rw_ios_per_sec": 0, 00:12:39.781 "rw_mbytes_per_sec": 0, 00:12:39.781 "r_mbytes_per_sec": 0, 00:12:39.781 "w_mbytes_per_sec": 0 00:12:39.781 }, 00:12:39.781 "claimed": false, 00:12:39.781 "zoned": false, 00:12:39.781 "supported_io_types": { 00:12:39.781 "read": true, 00:12:39.781 "write": true, 00:12:39.781 "unmap": true, 00:12:39.781 "flush": true, 00:12:39.781 "reset": true, 00:12:39.781 "nvme_admin": false, 00:12:39.781 "nvme_io": false, 00:12:39.781 "nvme_io_md": false, 00:12:39.781 "write_zeroes": true, 00:12:39.781 "zcopy": true, 00:12:39.781 "get_zone_info": false, 00:12:39.781 "zone_management": false, 00:12:39.781 "zone_append": false, 00:12:39.781 "compare": false, 00:12:39.781 "compare_and_write": false, 00:12:39.781 "abort": true, 00:12:39.781 "seek_hole": false, 00:12:39.781 "seek_data": false, 00:12:39.781 "copy": true, 00:12:39.781 "nvme_iov_md": false 00:12:39.781 }, 00:12:39.781 "memory_domains": [ 00:12:39.781 { 00:12:39.781 "dma_device_id": "system", 00:12:39.781 "dma_device_type": 1 00:12:39.781 }, 00:12:39.781 { 00:12:39.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.781 "dma_device_type": 2 00:12:39.781 } 00:12:39.781 ], 00:12:39.781 "driver_specific": {} 00:12:39.781 } 00:12:39.781 ] 00:12:39.781 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:39.781 02:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:39.781 02:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:39.781 02:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:40.040 BaseBdev3 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.040 02:37:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:40.300 [ 00:12:40.300 { 00:12:40.300 "name": "BaseBdev3", 00:12:40.300 "aliases": [ 00:12:40.300 "d4928fc7-4a2e-11ef-9c8e-7947904e2597" 00:12:40.300 ], 00:12:40.300 "product_name": "Malloc disk", 00:12:40.300 "block_size": 512, 00:12:40.300 "num_blocks": 65536, 00:12:40.300 "uuid": 
"d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:40.300 "assigned_rate_limits": { 00:12:40.300 "rw_ios_per_sec": 0, 00:12:40.300 "rw_mbytes_per_sec": 0, 00:12:40.300 "r_mbytes_per_sec": 0, 00:12:40.300 "w_mbytes_per_sec": 0 00:12:40.300 }, 00:12:40.300 "claimed": false, 00:12:40.300 "zoned": false, 00:12:40.300 "supported_io_types": { 00:12:40.300 "read": true, 00:12:40.300 "write": true, 00:12:40.300 "unmap": true, 00:12:40.300 "flush": true, 00:12:40.300 "reset": true, 00:12:40.300 "nvme_admin": false, 00:12:40.300 "nvme_io": false, 00:12:40.300 "nvme_io_md": false, 00:12:40.300 "write_zeroes": true, 00:12:40.300 "zcopy": true, 00:12:40.300 "get_zone_info": false, 00:12:40.300 "zone_management": false, 00:12:40.300 "zone_append": false, 00:12:40.300 "compare": false, 00:12:40.300 "compare_and_write": false, 00:12:40.300 "abort": true, 00:12:40.300 "seek_hole": false, 00:12:40.300 "seek_data": false, 00:12:40.300 "copy": true, 00:12:40.300 "nvme_iov_md": false 00:12:40.300 }, 00:12:40.300 "memory_domains": [ 00:12:40.300 { 00:12:40.300 "dma_device_id": "system", 00:12:40.300 "dma_device_type": 1 00:12:40.300 }, 00:12:40.300 { 00:12:40.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.300 "dma_device_type": 2 00:12:40.300 } 00:12:40.300 ], 00:12:40.300 "driver_specific": {} 00:12:40.300 } 00:12:40.300 ] 00:12:40.300 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:40.300 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:40.300 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:40.300 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:40.559 BaseBdev4 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.559 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:40.819 [ 00:12:40.819 { 00:12:40.819 "name": "BaseBdev4", 00:12:40.819 "aliases": [ 00:12:40.819 "d4e59395-4a2e-11ef-9c8e-7947904e2597" 00:12:40.819 ], 00:12:40.819 "product_name": "Malloc disk", 00:12:40.819 "block_size": 512, 00:12:40.819 "num_blocks": 65536, 00:12:40.819 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:40.819 "assigned_rate_limits": { 00:12:40.819 "rw_ios_per_sec": 0, 00:12:40.819 "rw_mbytes_per_sec": 0, 00:12:40.819 "r_mbytes_per_sec": 0, 00:12:40.819 "w_mbytes_per_sec": 0 00:12:40.819 }, 00:12:40.819 "claimed": false, 00:12:40.819 "zoned": false, 00:12:40.819 
"supported_io_types": { 00:12:40.819 "read": true, 00:12:40.819 "write": true, 00:12:40.819 "unmap": true, 00:12:40.819 "flush": true, 00:12:40.819 "reset": true, 00:12:40.819 "nvme_admin": false, 00:12:40.819 "nvme_io": false, 00:12:40.819 "nvme_io_md": false, 00:12:40.819 "write_zeroes": true, 00:12:40.819 "zcopy": true, 00:12:40.819 "get_zone_info": false, 00:12:40.819 "zone_management": false, 00:12:40.819 "zone_append": false, 00:12:40.819 "compare": false, 00:12:40.819 "compare_and_write": false, 00:12:40.819 "abort": true, 00:12:40.819 "seek_hole": false, 00:12:40.819 "seek_data": false, 00:12:40.819 "copy": true, 00:12:40.819 "nvme_iov_md": false 00:12:40.819 }, 00:12:40.819 "memory_domains": [ 00:12:40.819 { 00:12:40.819 "dma_device_id": "system", 00:12:40.819 "dma_device_type": 1 00:12:40.819 }, 00:12:40.819 { 00:12:40.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.819 "dma_device_type": 2 00:12:40.819 } 00:12:40.819 ], 00:12:40.819 "driver_specific": {} 00:12:40.819 } 00:12:40.819 ] 00:12:40.819 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:40.819 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:40.819 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:40.819 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:41.079 [2024-07-25 02:37:27.798153] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.079 [2024-07-25 02:37:27.798189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.079 [2024-07-25 02:37:27.798195] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.079 [2024-07-25 02:37:27.798606] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.079 [2024-07-25 02:37:27.798621] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.079 "name": "Existed_Raid", 00:12:41.079 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:41.079 "strip_size_kb": 64, 00:12:41.079 "state": "configuring", 00:12:41.079 "raid_level": "concat", 00:12:41.079 "superblock": true, 00:12:41.079 "num_base_bdevs": 4, 00:12:41.079 "num_base_bdevs_discovered": 3, 00:12:41.079 "num_base_bdevs_operational": 4, 00:12:41.079 "base_bdevs_list": [ 00:12:41.079 { 00:12:41.079 "name": "BaseBdev1", 00:12:41.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.079 "is_configured": false, 00:12:41.079 "data_offset": 0, 00:12:41.079 "data_size": 0 00:12:41.079 }, 00:12:41.079 { 00:12:41.079 "name": "BaseBdev2", 00:12:41.079 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:41.079 "is_configured": true, 00:12:41.079 "data_offset": 2048, 00:12:41.079 "data_size": 63488 00:12:41.079 }, 00:12:41.079 { 00:12:41.079 "name": "BaseBdev3", 00:12:41.079 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:41.079 "is_configured": true, 00:12:41.079 "data_offset": 2048, 00:12:41.079 "data_size": 63488 00:12:41.079 }, 00:12:41.079 { 00:12:41.079 "name": "BaseBdev4", 00:12:41.079 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:41.079 "is_configured": true, 00:12:41.079 "data_offset": 2048, 00:12:41.079 "data_size": 63488 00:12:41.079 } 00:12:41.079 ] 00:12:41.079 }' 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.079 02:37:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:41.648 [2024-07-25 02:37:28.410200] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.648 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.648 02:37:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.908 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.908 "name": "Existed_Raid", 00:12:41.908 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:41.908 "strip_size_kb": 64, 00:12:41.908 "state": "configuring", 00:12:41.908 "raid_level": "concat", 00:12:41.908 "superblock": true, 00:12:41.908 "num_base_bdevs": 4, 00:12:41.908 "num_base_bdevs_discovered": 2, 00:12:41.908 "num_base_bdevs_operational": 4, 00:12:41.908 "base_bdevs_list": [ 00:12:41.908 { 00:12:41.908 "name": "BaseBdev1", 00:12:41.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.908 "is_configured": false, 00:12:41.908 "data_offset": 0, 00:12:41.908 "data_size": 0 00:12:41.908 }, 00:12:41.908 { 00:12:41.908 "name": null, 00:12:41.908 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:41.908 "is_configured": false, 00:12:41.908 "data_offset": 2048, 00:12:41.908 "data_size": 63488 00:12:41.908 }, 00:12:41.908 { 00:12:41.908 "name": "BaseBdev3", 00:12:41.908 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:41.908 "is_configured": true, 00:12:41.908 "data_offset": 2048, 00:12:41.908 "data_size": 63488 00:12:41.908 }, 00:12:41.908 { 00:12:41.908 "name": "BaseBdev4", 00:12:41.908 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:41.908 "is_configured": true, 00:12:41.908 "data_offset": 2048, 00:12:41.908 "data_size": 63488 00:12:41.908 } 00:12:41.908 ] 00:12:41.908 }' 00:12:41.908 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.908 02:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.168 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.168 02:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:42.168 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:42.168 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.428 [2024-07-25 02:37:29.222377] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.428 BaseBdev1 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:42.428 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.688 [ 00:12:42.688 { 00:12:42.688 "name": "BaseBdev1", 00:12:42.688 "aliases": [ 00:12:42.688 "d611e5ea-4a2e-11ef-9c8e-7947904e2597" 00:12:42.688 ], 00:12:42.688 "product_name": "Malloc disk", 00:12:42.688 "block_size": 512, 00:12:42.688 "num_blocks": 65536, 00:12:42.688 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:42.688 "assigned_rate_limits": { 00:12:42.688 "rw_ios_per_sec": 0, 00:12:42.688 "rw_mbytes_per_sec": 0, 00:12:42.688 "r_mbytes_per_sec": 0, 00:12:42.688 "w_mbytes_per_sec": 0 00:12:42.688 }, 00:12:42.688 "claimed": true, 00:12:42.688 "claim_type": "exclusive_write", 00:12:42.688 "zoned": false, 00:12:42.688 "supported_io_types": { 00:12:42.688 "read": true, 00:12:42.688 "write": true, 00:12:42.688 "unmap": true, 00:12:42.688 "flush": true, 00:12:42.688 "reset": true, 00:12:42.688 "nvme_admin": false, 00:12:42.688 "nvme_io": false, 00:12:42.688 "nvme_io_md": false, 00:12:42.688 "write_zeroes": true, 00:12:42.688 "zcopy": true, 00:12:42.688 "get_zone_info": false, 00:12:42.688 "zone_management": false, 00:12:42.688 "zone_append": false, 00:12:42.688 "compare": false, 00:12:42.688 "compare_and_write": false, 00:12:42.688 "abort": true, 00:12:42.688 "seek_hole": false, 00:12:42.688 "seek_data": false, 00:12:42.688 "copy": true, 00:12:42.688 "nvme_iov_md": false 00:12:42.688 }, 00:12:42.688 "memory_domains": [ 00:12:42.688 { 00:12:42.688 "dma_device_id": "system", 00:12:42.688 "dma_device_type": 1 00:12:42.688 }, 00:12:42.688 { 00:12:42.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.688 "dma_device_type": 2 00:12:42.688 } 00:12:42.688 ], 00:12:42.688 "driver_specific": {} 00:12:42.688 } 00:12:42.688 ] 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.688 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.948 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.948 "name": 
"Existed_Raid", 00:12:42.948 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:42.948 "strip_size_kb": 64, 00:12:42.948 "state": "configuring", 00:12:42.948 "raid_level": "concat", 00:12:42.948 "superblock": true, 00:12:42.948 "num_base_bdevs": 4, 00:12:42.948 "num_base_bdevs_discovered": 3, 00:12:42.948 "num_base_bdevs_operational": 4, 00:12:42.948 "base_bdevs_list": [ 00:12:42.948 { 00:12:42.948 "name": "BaseBdev1", 00:12:42.948 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:42.948 "is_configured": true, 00:12:42.948 "data_offset": 2048, 00:12:42.948 "data_size": 63488 00:12:42.948 }, 00:12:42.948 { 00:12:42.948 "name": null, 00:12:42.948 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:42.948 "is_configured": false, 00:12:42.948 "data_offset": 2048, 00:12:42.948 "data_size": 63488 00:12:42.948 }, 00:12:42.948 { 00:12:42.948 "name": "BaseBdev3", 00:12:42.948 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:42.948 "is_configured": true, 00:12:42.948 "data_offset": 2048, 00:12:42.948 "data_size": 63488 00:12:42.948 }, 00:12:42.948 { 00:12:42.948 "name": "BaseBdev4", 00:12:42.948 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:42.948 "is_configured": true, 00:12:42.948 "data_offset": 2048, 00:12:42.948 "data_size": 63488 00:12:42.948 } 00:12:42.948 ] 00:12:42.948 }' 00:12:42.948 02:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.948 02:37:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.208 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.208 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:43.468 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:43.468 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:43.728 [2024-07-25 02:37:30.382390] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.728 "name": "Existed_Raid", 00:12:43.728 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:43.728 "strip_size_kb": 64, 00:12:43.728 "state": "configuring", 00:12:43.728 "raid_level": "concat", 00:12:43.728 "superblock": true, 00:12:43.728 "num_base_bdevs": 4, 00:12:43.728 "num_base_bdevs_discovered": 2, 00:12:43.728 "num_base_bdevs_operational": 4, 00:12:43.728 "base_bdevs_list": [ 00:12:43.728 { 00:12:43.728 "name": "BaseBdev1", 00:12:43.728 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:43.728 "is_configured": true, 00:12:43.728 "data_offset": 2048, 00:12:43.728 "data_size": 63488 00:12:43.728 }, 00:12:43.728 { 00:12:43.728 "name": null, 00:12:43.728 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:43.728 "is_configured": false, 00:12:43.728 "data_offset": 2048, 00:12:43.728 "data_size": 63488 00:12:43.728 }, 00:12:43.728 { 00:12:43.728 "name": null, 00:12:43.728 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:43.728 "is_configured": false, 00:12:43.728 "data_offset": 2048, 00:12:43.728 "data_size": 63488 00:12:43.728 }, 00:12:43.728 { 00:12:43.728 "name": "BaseBdev4", 00:12:43.728 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:43.728 "is_configured": true, 00:12:43.728 "data_offset": 2048, 00:12:43.728 "data_size": 63488 00:12:43.728 } 00:12:43.728 ] 00:12:43.728 }' 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.728 02:37:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.988 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.988 02:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:44.248 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:44.248 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:44.508 [2024-07-25 02:37:31.206467] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.508 "name": "Existed_Raid", 00:12:44.508 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:44.508 "strip_size_kb": 64, 00:12:44.508 "state": "configuring", 00:12:44.508 "raid_level": "concat", 00:12:44.508 "superblock": true, 00:12:44.508 "num_base_bdevs": 4, 00:12:44.508 "num_base_bdevs_discovered": 3, 00:12:44.508 "num_base_bdevs_operational": 4, 00:12:44.508 "base_bdevs_list": [ 00:12:44.508 { 00:12:44.508 "name": "BaseBdev1", 00:12:44.508 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:44.508 "is_configured": true, 00:12:44.508 "data_offset": 2048, 00:12:44.508 "data_size": 63488 00:12:44.508 }, 00:12:44.508 { 00:12:44.508 "name": null, 00:12:44.508 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:44.508 "is_configured": false, 00:12:44.508 "data_offset": 2048, 00:12:44.508 "data_size": 63488 00:12:44.508 }, 00:12:44.508 { 00:12:44.508 "name": "BaseBdev3", 00:12:44.508 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:44.508 "is_configured": true, 00:12:44.508 "data_offset": 2048, 00:12:44.508 "data_size": 63488 00:12:44.508 }, 00:12:44.508 { 00:12:44.508 "name": "BaseBdev4", 00:12:44.508 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:44.508 "is_configured": true, 00:12:44.508 "data_offset": 2048, 00:12:44.508 "data_size": 63488 00:12:44.508 } 00:12:44.508 ] 00:12:44.508 }' 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.508 02:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.768 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.028 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:45.028 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:45.028 02:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:45.288 [2024-07-25 02:37:32.014541] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.288 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.548 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.548 "name": "Existed_Raid", 00:12:45.548 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:45.548 "strip_size_kb": 64, 00:12:45.548 "state": "configuring", 00:12:45.548 "raid_level": "concat", 00:12:45.548 "superblock": true, 00:12:45.548 "num_base_bdevs": 4, 00:12:45.548 "num_base_bdevs_discovered": 2, 00:12:45.548 "num_base_bdevs_operational": 4, 00:12:45.548 "base_bdevs_list": [ 00:12:45.548 { 00:12:45.548 "name": null, 00:12:45.548 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:45.548 "is_configured": false, 00:12:45.548 "data_offset": 2048, 00:12:45.548 "data_size": 63488 00:12:45.548 }, 00:12:45.548 { 00:12:45.548 "name": null, 00:12:45.548 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:45.548 "is_configured": false, 00:12:45.548 "data_offset": 2048, 00:12:45.548 "data_size": 63488 00:12:45.548 }, 00:12:45.548 { 00:12:45.548 "name": "BaseBdev3", 00:12:45.548 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:45.548 "is_configured": true, 00:12:45.548 "data_offset": 2048, 00:12:45.548 "data_size": 63488 00:12:45.548 }, 00:12:45.548 { 00:12:45.548 "name": "BaseBdev4", 00:12:45.548 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:45.548 "is_configured": true, 00:12:45.548 "data_offset": 2048, 00:12:45.548 "data_size": 63488 00:12:45.548 } 00:12:45.548 ] 00:12:45.548 }' 00:12:45.548 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.548 02:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.808 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.808 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:45.808 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:45.808 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:46.068 [2024-07-25 02:37:32.815379] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:46.068 02:37:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.068 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.069 02:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.329 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.329 "name": "Existed_Raid", 00:12:46.329 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:46.329 "strip_size_kb": 64, 00:12:46.329 "state": "configuring", 00:12:46.329 "raid_level": "concat", 00:12:46.329 "superblock": true, 00:12:46.329 "num_base_bdevs": 4, 00:12:46.329 "num_base_bdevs_discovered": 3, 00:12:46.329 "num_base_bdevs_operational": 4, 00:12:46.329 "base_bdevs_list": [ 00:12:46.329 { 00:12:46.329 "name": null, 00:12:46.329 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:46.329 "is_configured": false, 00:12:46.329 "data_offset": 2048, 00:12:46.329 "data_size": 63488 00:12:46.329 }, 00:12:46.329 { 00:12:46.329 "name": "BaseBdev2", 00:12:46.329 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:46.329 "is_configured": true, 00:12:46.329 "data_offset": 2048, 00:12:46.329 "data_size": 63488 00:12:46.329 }, 00:12:46.329 { 00:12:46.329 "name": "BaseBdev3", 00:12:46.329 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:46.329 "is_configured": true, 00:12:46.329 "data_offset": 2048, 00:12:46.329 "data_size": 63488 00:12:46.329 }, 00:12:46.329 { 00:12:46.329 "name": "BaseBdev4", 00:12:46.329 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:46.329 "is_configured": true, 00:12:46.329 "data_offset": 2048, 00:12:46.329 "data_size": 63488 00:12:46.329 } 00:12:46.329 ] 00:12:46.329 }' 00:12:46.329 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.329 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.591 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.591 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:46.591 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:46.591 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 
00:12:46.591 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.851 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d611e5ea-4a2e-11ef-9c8e-7947904e2597 00:12:47.110 [2024-07-25 02:37:33.803547] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:47.110 [2024-07-25 02:37:33.803578] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b1f16a34f00 00:12:47.110 [2024-07-25 02:37:33.803597] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:47.110 [2024-07-25 02:37:33.803613] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b1f16a97e20 00:12:47.110 [2024-07-25 02:37:33.803641] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b1f16a34f00 00:12:47.110 [2024-07-25 02:37:33.803644] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3b1f16a34f00 00:12:47.110 [2024-07-25 02:37:33.803658] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.110 NewBaseBdev 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:47.110 02:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:47.370 [ 00:12:47.370 { 00:12:47.370 "name": "NewBaseBdev", 00:12:47.370 "aliases": [ 00:12:47.370 "d611e5ea-4a2e-11ef-9c8e-7947904e2597" 00:12:47.370 ], 00:12:47.370 "product_name": "Malloc disk", 00:12:47.370 "block_size": 512, 00:12:47.370 "num_blocks": 65536, 00:12:47.370 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:47.370 "assigned_rate_limits": { 00:12:47.370 "rw_ios_per_sec": 0, 00:12:47.370 "rw_mbytes_per_sec": 0, 00:12:47.370 "r_mbytes_per_sec": 0, 00:12:47.370 "w_mbytes_per_sec": 0 00:12:47.370 }, 00:12:47.370 "claimed": true, 00:12:47.370 "claim_type": "exclusive_write", 00:12:47.370 "zoned": false, 00:12:47.370 "supported_io_types": { 00:12:47.370 "read": true, 00:12:47.370 "write": true, 00:12:47.370 "unmap": true, 00:12:47.370 "flush": true, 00:12:47.370 "reset": true, 00:12:47.370 "nvme_admin": false, 00:12:47.370 "nvme_io": false, 00:12:47.370 "nvme_io_md": false, 00:12:47.370 "write_zeroes": true, 00:12:47.370 "zcopy": true, 00:12:47.370 "get_zone_info": false, 00:12:47.370 "zone_management": false, 00:12:47.370 "zone_append": false, 00:12:47.370 "compare": 
false, 00:12:47.370 "compare_and_write": false, 00:12:47.370 "abort": true, 00:12:47.370 "seek_hole": false, 00:12:47.370 "seek_data": false, 00:12:47.370 "copy": true, 00:12:47.370 "nvme_iov_md": false 00:12:47.370 }, 00:12:47.370 "memory_domains": [ 00:12:47.370 { 00:12:47.370 "dma_device_id": "system", 00:12:47.370 "dma_device_type": 1 00:12:47.370 }, 00:12:47.370 { 00:12:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.370 "dma_device_type": 2 00:12:47.370 } 00:12:47.370 ], 00:12:47.370 "driver_specific": {} 00:12:47.370 } 00:12:47.370 ] 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.370 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.630 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.630 "name": "Existed_Raid", 00:12:47.630 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:47.630 "strip_size_kb": 64, 00:12:47.630 "state": "online", 00:12:47.630 "raid_level": "concat", 00:12:47.630 "superblock": true, 00:12:47.630 "num_base_bdevs": 4, 00:12:47.630 "num_base_bdevs_discovered": 4, 00:12:47.630 "num_base_bdevs_operational": 4, 00:12:47.630 "base_bdevs_list": [ 00:12:47.630 { 00:12:47.630 "name": "NewBaseBdev", 00:12:47.630 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:47.630 "is_configured": true, 00:12:47.630 "data_offset": 2048, 00:12:47.630 "data_size": 63488 00:12:47.630 }, 00:12:47.630 { 00:12:47.630 "name": "BaseBdev2", 00:12:47.630 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:47.630 "is_configured": true, 00:12:47.630 "data_offset": 2048, 00:12:47.630 "data_size": 63488 00:12:47.630 }, 00:12:47.630 { 00:12:47.630 "name": "BaseBdev3", 00:12:47.630 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:47.630 "is_configured": true, 00:12:47.630 "data_offset": 2048, 00:12:47.630 "data_size": 63488 00:12:47.630 }, 00:12:47.630 { 00:12:47.630 "name": "BaseBdev4", 00:12:47.630 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:47.630 "is_configured": true, 00:12:47.630 "data_offset": 2048, 00:12:47.630 
"data_size": 63488 00:12:47.630 } 00:12:47.630 ] 00:12:47.630 }' 00:12:47.630 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.630 02:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:47.890 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:48.149 [2024-07-25 02:37:34.795581] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.149 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:48.149 "name": "Existed_Raid", 00:12:48.149 "aliases": [ 00:12:48.149 "d5389743-4a2e-11ef-9c8e-7947904e2597" 00:12:48.149 ], 00:12:48.149 "product_name": "Raid Volume", 00:12:48.149 "block_size": 512, 00:12:48.149 "num_blocks": 253952, 00:12:48.149 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:48.149 "assigned_rate_limits": { 00:12:48.149 "rw_ios_per_sec": 0, 00:12:48.149 "rw_mbytes_per_sec": 0, 00:12:48.149 "r_mbytes_per_sec": 0, 00:12:48.149 "w_mbytes_per_sec": 0 00:12:48.149 }, 00:12:48.149 "claimed": false, 00:12:48.149 "zoned": false, 00:12:48.149 "supported_io_types": { 00:12:48.149 "read": true, 00:12:48.149 "write": true, 00:12:48.149 "unmap": true, 00:12:48.149 "flush": true, 00:12:48.149 "reset": true, 00:12:48.149 "nvme_admin": false, 00:12:48.149 "nvme_io": false, 00:12:48.149 "nvme_io_md": false, 00:12:48.149 "write_zeroes": true, 00:12:48.149 "zcopy": false, 00:12:48.149 "get_zone_info": false, 00:12:48.149 "zone_management": false, 00:12:48.149 "zone_append": false, 00:12:48.149 "compare": false, 00:12:48.149 "compare_and_write": false, 00:12:48.149 "abort": false, 00:12:48.149 "seek_hole": false, 00:12:48.149 "seek_data": false, 00:12:48.149 "copy": false, 00:12:48.149 "nvme_iov_md": false 00:12:48.149 }, 00:12:48.149 "memory_domains": [ 00:12:48.149 { 00:12:48.149 "dma_device_id": "system", 00:12:48.149 "dma_device_type": 1 00:12:48.149 }, 00:12:48.149 { 00:12:48.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.149 "dma_device_type": 2 00:12:48.149 }, 00:12:48.149 { 00:12:48.149 "dma_device_id": "system", 00:12:48.149 "dma_device_type": 1 00:12:48.149 }, 00:12:48.149 { 00:12:48.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.149 "dma_device_type": 2 00:12:48.149 }, 00:12:48.150 { 00:12:48.150 "dma_device_id": "system", 00:12:48.150 "dma_device_type": 1 00:12:48.150 }, 00:12:48.150 { 00:12:48.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.150 "dma_device_type": 2 00:12:48.150 }, 00:12:48.150 { 00:12:48.150 "dma_device_id": "system", 00:12:48.150 "dma_device_type": 1 00:12:48.150 }, 
00:12:48.150 { 00:12:48.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.150 "dma_device_type": 2 00:12:48.150 } 00:12:48.150 ], 00:12:48.150 "driver_specific": { 00:12:48.150 "raid": { 00:12:48.150 "uuid": "d5389743-4a2e-11ef-9c8e-7947904e2597", 00:12:48.150 "strip_size_kb": 64, 00:12:48.150 "state": "online", 00:12:48.150 "raid_level": "concat", 00:12:48.150 "superblock": true, 00:12:48.150 "num_base_bdevs": 4, 00:12:48.150 "num_base_bdevs_discovered": 4, 00:12:48.150 "num_base_bdevs_operational": 4, 00:12:48.150 "base_bdevs_list": [ 00:12:48.150 { 00:12:48.150 "name": "NewBaseBdev", 00:12:48.150 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:48.150 "is_configured": true, 00:12:48.150 "data_offset": 2048, 00:12:48.150 "data_size": 63488 00:12:48.150 }, 00:12:48.150 { 00:12:48.150 "name": "BaseBdev2", 00:12:48.150 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:48.150 "is_configured": true, 00:12:48.150 "data_offset": 2048, 00:12:48.150 "data_size": 63488 00:12:48.150 }, 00:12:48.150 { 00:12:48.150 "name": "BaseBdev3", 00:12:48.150 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:48.150 "is_configured": true, 00:12:48.150 "data_offset": 2048, 00:12:48.150 "data_size": 63488 00:12:48.150 }, 00:12:48.150 { 00:12:48.150 "name": "BaseBdev4", 00:12:48.150 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:48.150 "is_configured": true, 00:12:48.150 "data_offset": 2048, 00:12:48.150 "data_size": 63488 00:12:48.150 } 00:12:48.150 ] 00:12:48.150 } 00:12:48.150 } 00:12:48.150 }' 00:12:48.150 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.150 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:48.150 BaseBdev2 00:12:48.150 BaseBdev3 00:12:48.150 BaseBdev4' 00:12:48.150 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.150 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:48.150 02:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.150 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.150 "name": "NewBaseBdev", 00:12:48.150 "aliases": [ 00:12:48.150 "d611e5ea-4a2e-11ef-9c8e-7947904e2597" 00:12:48.150 ], 00:12:48.150 "product_name": "Malloc disk", 00:12:48.150 "block_size": 512, 00:12:48.150 "num_blocks": 65536, 00:12:48.150 "uuid": "d611e5ea-4a2e-11ef-9c8e-7947904e2597", 00:12:48.150 "assigned_rate_limits": { 00:12:48.150 "rw_ios_per_sec": 0, 00:12:48.150 "rw_mbytes_per_sec": 0, 00:12:48.150 "r_mbytes_per_sec": 0, 00:12:48.150 "w_mbytes_per_sec": 0 00:12:48.150 }, 00:12:48.150 "claimed": true, 00:12:48.150 "claim_type": "exclusive_write", 00:12:48.150 "zoned": false, 00:12:48.150 "supported_io_types": { 00:12:48.150 "read": true, 00:12:48.150 "write": true, 00:12:48.150 "unmap": true, 00:12:48.150 "flush": true, 00:12:48.150 "reset": true, 00:12:48.150 "nvme_admin": false, 00:12:48.150 "nvme_io": false, 00:12:48.150 "nvme_io_md": false, 00:12:48.150 "write_zeroes": true, 00:12:48.150 "zcopy": true, 00:12:48.150 "get_zone_info": false, 00:12:48.150 "zone_management": false, 00:12:48.150 "zone_append": false, 00:12:48.150 "compare": false, 00:12:48.150 "compare_and_write": false, 00:12:48.150 "abort": 
true, 00:12:48.150 "seek_hole": false, 00:12:48.150 "seek_data": false, 00:12:48.150 "copy": true, 00:12:48.150 "nvme_iov_md": false 00:12:48.150 }, 00:12:48.150 "memory_domains": [ 00:12:48.150 { 00:12:48.150 "dma_device_id": "system", 00:12:48.150 "dma_device_type": 1 00:12:48.150 }, 00:12:48.150 { 00:12:48.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.150 "dma_device_type": 2 00:12:48.150 } 00:12:48.150 ], 00:12:48.150 "driver_specific": {} 00:12:48.150 }' 00:12:48.150 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.150 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.150 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.150 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.150 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.416 "name": "BaseBdev2", 00:12:48.416 "aliases": [ 00:12:48.416 "d43eef5f-4a2e-11ef-9c8e-7947904e2597" 00:12:48.416 ], 00:12:48.416 "product_name": "Malloc disk", 00:12:48.416 "block_size": 512, 00:12:48.416 "num_blocks": 65536, 00:12:48.416 "uuid": "d43eef5f-4a2e-11ef-9c8e-7947904e2597", 00:12:48.416 "assigned_rate_limits": { 00:12:48.416 "rw_ios_per_sec": 0, 00:12:48.416 "rw_mbytes_per_sec": 0, 00:12:48.416 "r_mbytes_per_sec": 0, 00:12:48.416 "w_mbytes_per_sec": 0 00:12:48.416 }, 00:12:48.416 "claimed": true, 00:12:48.416 "claim_type": "exclusive_write", 00:12:48.416 "zoned": false, 00:12:48.416 "supported_io_types": { 00:12:48.416 "read": true, 00:12:48.416 "write": true, 00:12:48.416 "unmap": true, 00:12:48.416 "flush": true, 00:12:48.416 "reset": true, 00:12:48.416 "nvme_admin": false, 00:12:48.416 "nvme_io": false, 00:12:48.416 "nvme_io_md": false, 00:12:48.416 "write_zeroes": true, 00:12:48.416 "zcopy": true, 00:12:48.416 "get_zone_info": false, 00:12:48.416 "zone_management": false, 00:12:48.416 "zone_append": false, 00:12:48.416 "compare": false, 00:12:48.416 "compare_and_write": false, 00:12:48.416 "abort": true, 00:12:48.416 "seek_hole": false, 00:12:48.416 "seek_data": false, 00:12:48.416 "copy": true, 00:12:48.416 "nvme_iov_md": false 00:12:48.416 }, 
00:12:48.416 "memory_domains": [ 00:12:48.416 { 00:12:48.416 "dma_device_id": "system", 00:12:48.416 "dma_device_type": 1 00:12:48.416 }, 00:12:48.416 { 00:12:48.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.416 "dma_device_type": 2 00:12:48.416 } 00:12:48.416 ], 00:12:48.416 "driver_specific": {} 00:12:48.416 }' 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.416 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.675 "name": "BaseBdev3", 00:12:48.675 "aliases": [ 00:12:48.675 "d4928fc7-4a2e-11ef-9c8e-7947904e2597" 00:12:48.675 ], 00:12:48.675 "product_name": "Malloc disk", 00:12:48.675 "block_size": 512, 00:12:48.675 "num_blocks": 65536, 00:12:48.675 "uuid": "d4928fc7-4a2e-11ef-9c8e-7947904e2597", 00:12:48.675 "assigned_rate_limits": { 00:12:48.675 "rw_ios_per_sec": 0, 00:12:48.675 "rw_mbytes_per_sec": 0, 00:12:48.675 "r_mbytes_per_sec": 0, 00:12:48.675 "w_mbytes_per_sec": 0 00:12:48.675 }, 00:12:48.675 "claimed": true, 00:12:48.675 "claim_type": "exclusive_write", 00:12:48.675 "zoned": false, 00:12:48.675 "supported_io_types": { 00:12:48.675 "read": true, 00:12:48.675 "write": true, 00:12:48.675 "unmap": true, 00:12:48.675 "flush": true, 00:12:48.675 "reset": true, 00:12:48.675 "nvme_admin": false, 00:12:48.675 "nvme_io": false, 00:12:48.675 "nvme_io_md": false, 00:12:48.675 "write_zeroes": true, 00:12:48.675 "zcopy": true, 00:12:48.675 "get_zone_info": false, 00:12:48.675 "zone_management": false, 00:12:48.675 "zone_append": false, 00:12:48.675 "compare": false, 00:12:48.675 "compare_and_write": false, 00:12:48.675 "abort": true, 00:12:48.675 "seek_hole": false, 00:12:48.675 "seek_data": false, 00:12:48.675 "copy": true, 00:12:48.675 "nvme_iov_md": false 00:12:48.675 }, 00:12:48.675 "memory_domains": [ 00:12:48.675 { 00:12:48.675 "dma_device_id": "system", 00:12:48.675 "dma_device_type": 1 00:12:48.675 }, 00:12:48.675 { 
00:12:48.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.675 "dma_device_type": 2 00:12:48.675 } 00:12:48.675 ], 00:12:48.675 "driver_specific": {} 00:12:48.675 }' 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.675 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:48.935 "name": "BaseBdev4", 00:12:48.935 "aliases": [ 00:12:48.935 "d4e59395-4a2e-11ef-9c8e-7947904e2597" 00:12:48.935 ], 00:12:48.935 "product_name": "Malloc disk", 00:12:48.935 "block_size": 512, 00:12:48.935 "num_blocks": 65536, 00:12:48.935 "uuid": "d4e59395-4a2e-11ef-9c8e-7947904e2597", 00:12:48.935 "assigned_rate_limits": { 00:12:48.935 "rw_ios_per_sec": 0, 00:12:48.935 "rw_mbytes_per_sec": 0, 00:12:48.935 "r_mbytes_per_sec": 0, 00:12:48.935 "w_mbytes_per_sec": 0 00:12:48.935 }, 00:12:48.935 "claimed": true, 00:12:48.935 "claim_type": "exclusive_write", 00:12:48.935 "zoned": false, 00:12:48.935 "supported_io_types": { 00:12:48.935 "read": true, 00:12:48.935 "write": true, 00:12:48.935 "unmap": true, 00:12:48.935 "flush": true, 00:12:48.935 "reset": true, 00:12:48.935 "nvme_admin": false, 00:12:48.935 "nvme_io": false, 00:12:48.935 "nvme_io_md": false, 00:12:48.935 "write_zeroes": true, 00:12:48.935 "zcopy": true, 00:12:48.935 "get_zone_info": false, 00:12:48.935 "zone_management": false, 00:12:48.935 "zone_append": false, 00:12:48.935 "compare": false, 00:12:48.935 "compare_and_write": false, 00:12:48.935 "abort": true, 00:12:48.935 "seek_hole": false, 00:12:48.935 "seek_data": false, 00:12:48.935 "copy": true, 00:12:48.935 "nvme_iov_md": false 00:12:48.935 }, 00:12:48.935 "memory_domains": [ 00:12:48.935 { 00:12:48.935 "dma_device_id": "system", 00:12:48.935 "dma_device_type": 1 00:12:48.935 }, 00:12:48.935 { 00:12:48.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.935 "dma_device_type": 2 00:12:48.935 } 00:12:48.935 ], 00:12:48.935 "driver_specific": 
{} 00:12:48.935 }' 00:12:48.935 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:49.194 02:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:49.455 [2024-07-25 02:37:36.099707] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.455 [2024-07-25 02:37:36.099723] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.455 [2024-07-25 02:37:36.099735] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.455 [2024-07-25 02:37:36.099762] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.455 [2024-07-25 02:37:36.099765] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b1f16a34f00 name Existed_Raid, state offline 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61088 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61088 ']' 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61088 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61088 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:49.455 killing process with pid 61088 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61088' 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61088 00:12:49.455 [2024-07-25 02:37:36.127951] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61088 00:12:49.455 [2024-07-25 02:37:36.146287] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:49.455 00:12:49.455 real 0m20.021s 00:12:49.455 user 0m36.206s 00:12:49.455 sys 0m3.237s 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.455 02:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.455 ************************************ 00:12:49.455 END TEST raid_state_function_test_sb 00:12:49.455 ************************************ 00:12:49.715 02:37:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:49.715 02:37:36 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:49.715 02:37:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:49.715 02:37:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.715 02:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.715 ************************************ 00:12:49.715 START TEST raid_superblock_test 00:12:49.715 ************************************ 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=61878 00:12:49.715 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 61878 /var/tmp/spdk-raid.sock 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 61878 ']' 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.716 02:37:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.716 [2024-07-25 02:37:36.389145] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:12:49.716 [2024-07-25 02:37:36.389504] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:49.975 EAL: TSC is not safe to use in SMP mode 00:12:49.975 EAL: TSC is not invariant 00:12:49.975 [2024-07-25 02:37:36.810481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.235 [2024-07-25 02:37:36.903328] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:50.235 [2024-07-25 02:37:36.904972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.235 [2024-07-25 02:37:36.905574] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.235 [2024-07-25 02:37:36.905585] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:50.495 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:50.755 malloc1 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:50.755 [2024-07-25 02:37:37.628484] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:50.755 [2024-07-25 02:37:37.628519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.755 [2024-07-25 02:37:37.628527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8434780 00:12:50.755 [2024-07-25 02:37:37.628532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.755 [2024-07-25 02:37:37.629234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.755 [2024-07-25 02:37:37.629258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:50.755 pt1 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:50.755 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:51.015 malloc2 00:12:51.015 02:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:51.275 [2024-07-25 02:37:37.996518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:51.275 [2024-07-25 02:37:37.996555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.275 [2024-07-25 02:37:37.996562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8434c80 00:12:51.275 [2024-07-25 02:37:37.996567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.275 [2024-07-25 02:37:37.997050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.275 [2024-07-25 02:37:37.997074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:51.275 pt2 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:51.275 
02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:51.275 malloc3 00:12:51.275 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:51.535 [2024-07-25 02:37:38.336545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:51.535 [2024-07-25 02:37:38.336576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.535 [2024-07-25 02:37:38.336583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8435180 00:12:51.535 [2024-07-25 02:37:38.336588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.535 [2024-07-25 02:37:38.337051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.535 [2024-07-25 02:37:38.337074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:51.535 pt3 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:51.535 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:12:51.799 malloc4 00:12:51.799 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:51.799 [2024-07-25 02:37:38.700578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:51.799 [2024-07-25 02:37:38.700610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.799 [2024-07-25 02:37:38.700617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8435680 00:12:51.799 [2024-07-25 02:37:38.700623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.799 [2024-07-25 02:37:38.701059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.799 [2024-07-25 02:37:38.701084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:52.060 pt4 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i 
<= num_base_bdevs )) 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:12:52.060 [2024-07-25 02:37:38.884603] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:52.060 [2024-07-25 02:37:38.885033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:52.060 [2024-07-25 02:37:38.885053] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:52.060 [2024-07-25 02:37:38.885060] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:52.060 [2024-07-25 02:37:38.885101] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d35b8435900 00:12:52.060 [2024-07-25 02:37:38.885106] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:52.060 [2024-07-25 02:37:38.885133] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d35b8497e20 00:12:52.060 [2024-07-25 02:37:38.885201] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d35b8435900 00:12:52.060 [2024-07-25 02:37:38.885204] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d35b8435900 00:12:52.060 [2024-07-25 02:37:38.885225] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.060 02:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.320 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.320 "name": "raid_bdev1", 00:12:52.320 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:52.320 "strip_size_kb": 64, 00:12:52.320 "state": "online", 00:12:52.320 "raid_level": "concat", 00:12:52.320 "superblock": true, 00:12:52.320 "num_base_bdevs": 4, 00:12:52.320 "num_base_bdevs_discovered": 4, 00:12:52.320 "num_base_bdevs_operational": 4, 00:12:52.320 "base_bdevs_list": [ 00:12:52.320 { 00:12:52.320 "name": "pt1", 00:12:52.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.320 "is_configured": true, 00:12:52.320 
"data_offset": 2048, 00:12:52.320 "data_size": 63488 00:12:52.320 }, 00:12:52.320 { 00:12:52.320 "name": "pt2", 00:12:52.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.320 "is_configured": true, 00:12:52.320 "data_offset": 2048, 00:12:52.320 "data_size": 63488 00:12:52.320 }, 00:12:52.320 { 00:12:52.320 "name": "pt3", 00:12:52.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.320 "is_configured": true, 00:12:52.320 "data_offset": 2048, 00:12:52.320 "data_size": 63488 00:12:52.320 }, 00:12:52.320 { 00:12:52.320 "name": "pt4", 00:12:52.320 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:52.320 "is_configured": true, 00:12:52.320 "data_offset": 2048, 00:12:52.320 "data_size": 63488 00:12:52.320 } 00:12:52.320 ] 00:12:52.320 }' 00:12:52.320 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.320 02:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.579 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:52.580 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:52.840 [2024-07-25 02:37:39.520676] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:52.840 "name": "raid_bdev1", 00:12:52.840 "aliases": [ 00:12:52.840 "dbd43f98-4a2e-11ef-9c8e-7947904e2597" 00:12:52.840 ], 00:12:52.840 "product_name": "Raid Volume", 00:12:52.840 "block_size": 512, 00:12:52.840 "num_blocks": 253952, 00:12:52.840 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:52.840 "assigned_rate_limits": { 00:12:52.840 "rw_ios_per_sec": 0, 00:12:52.840 "rw_mbytes_per_sec": 0, 00:12:52.840 "r_mbytes_per_sec": 0, 00:12:52.840 "w_mbytes_per_sec": 0 00:12:52.840 }, 00:12:52.840 "claimed": false, 00:12:52.840 "zoned": false, 00:12:52.840 "supported_io_types": { 00:12:52.840 "read": true, 00:12:52.840 "write": true, 00:12:52.840 "unmap": true, 00:12:52.840 "flush": true, 00:12:52.840 "reset": true, 00:12:52.840 "nvme_admin": false, 00:12:52.840 "nvme_io": false, 00:12:52.840 "nvme_io_md": false, 00:12:52.840 "write_zeroes": true, 00:12:52.840 "zcopy": false, 00:12:52.840 "get_zone_info": false, 00:12:52.840 "zone_management": false, 00:12:52.840 "zone_append": false, 00:12:52.840 "compare": false, 00:12:52.840 "compare_and_write": false, 00:12:52.840 "abort": false, 00:12:52.840 "seek_hole": false, 00:12:52.840 "seek_data": false, 00:12:52.840 "copy": false, 00:12:52.840 "nvme_iov_md": false 00:12:52.840 }, 00:12:52.840 "memory_domains": [ 00:12:52.840 { 00:12:52.840 "dma_device_id": "system", 00:12:52.840 "dma_device_type": 1 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:52.840 "dma_device_type": 2 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "system", 00:12:52.840 "dma_device_type": 1 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.840 "dma_device_type": 2 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "system", 00:12:52.840 "dma_device_type": 1 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.840 "dma_device_type": 2 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "system", 00:12:52.840 "dma_device_type": 1 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.840 "dma_device_type": 2 00:12:52.840 } 00:12:52.840 ], 00:12:52.840 "driver_specific": { 00:12:52.840 "raid": { 00:12:52.840 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:52.840 "strip_size_kb": 64, 00:12:52.840 "state": "online", 00:12:52.840 "raid_level": "concat", 00:12:52.840 "superblock": true, 00:12:52.840 "num_base_bdevs": 4, 00:12:52.840 "num_base_bdevs_discovered": 4, 00:12:52.840 "num_base_bdevs_operational": 4, 00:12:52.840 "base_bdevs_list": [ 00:12:52.840 { 00:12:52.840 "name": "pt1", 00:12:52.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.840 "is_configured": true, 00:12:52.840 "data_offset": 2048, 00:12:52.840 "data_size": 63488 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "name": "pt2", 00:12:52.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.840 "is_configured": true, 00:12:52.840 "data_offset": 2048, 00:12:52.840 "data_size": 63488 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "name": "pt3", 00:12:52.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.840 "is_configured": true, 00:12:52.840 "data_offset": 2048, 00:12:52.840 "data_size": 63488 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "name": "pt4", 00:12:52.840 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:52.840 "is_configured": true, 00:12:52.840 "data_offset": 2048, 00:12:52.840 "data_size": 63488 00:12:52.840 } 00:12:52.840 ] 00:12:52.840 } 00:12:52.840 } 00:12:52.840 }' 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:52.840 pt2 00:12:52.840 pt3 00:12:52.840 pt4' 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:52.840 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:52.840 "name": "pt1", 00:12:52.840 "aliases": [ 00:12:52.840 "00000000-0000-0000-0000-000000000001" 00:12:52.840 ], 00:12:52.840 "product_name": "passthru", 00:12:52.840 "block_size": 512, 00:12:52.840 "num_blocks": 65536, 00:12:52.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:52.840 "assigned_rate_limits": { 00:12:52.840 "rw_ios_per_sec": 0, 00:12:52.840 "rw_mbytes_per_sec": 0, 00:12:52.840 "r_mbytes_per_sec": 0, 00:12:52.840 "w_mbytes_per_sec": 0 00:12:52.840 }, 00:12:52.840 "claimed": true, 00:12:52.840 "claim_type": "exclusive_write", 00:12:52.840 "zoned": false, 00:12:52.840 
"supported_io_types": { 00:12:52.840 "read": true, 00:12:52.840 "write": true, 00:12:52.840 "unmap": true, 00:12:52.840 "flush": true, 00:12:52.840 "reset": true, 00:12:52.840 "nvme_admin": false, 00:12:52.840 "nvme_io": false, 00:12:52.840 "nvme_io_md": false, 00:12:52.840 "write_zeroes": true, 00:12:52.840 "zcopy": true, 00:12:52.840 "get_zone_info": false, 00:12:52.840 "zone_management": false, 00:12:52.840 "zone_append": false, 00:12:52.840 "compare": false, 00:12:52.840 "compare_and_write": false, 00:12:52.840 "abort": true, 00:12:52.840 "seek_hole": false, 00:12:52.840 "seek_data": false, 00:12:52.840 "copy": true, 00:12:52.840 "nvme_iov_md": false 00:12:52.840 }, 00:12:52.840 "memory_domains": [ 00:12:52.840 { 00:12:52.840 "dma_device_id": "system", 00:12:52.840 "dma_device_type": 1 00:12:52.840 }, 00:12:52.840 { 00:12:52.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.840 "dma_device_type": 2 00:12:52.840 } 00:12:52.840 ], 00:12:52.840 "driver_specific": { 00:12:52.840 "passthru": { 00:12:52.840 "name": "pt1", 00:12:52.840 "base_bdev_name": "malloc1" 00:12:52.840 } 00:12:52.840 } 00:12:52.840 }' 00:12:52.841 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:52.841 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:53.100 02:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.361 "name": "pt2", 00:12:53.361 "aliases": [ 00:12:53.361 "00000000-0000-0000-0000-000000000002" 00:12:53.361 ], 00:12:53.361 "product_name": "passthru", 00:12:53.361 "block_size": 512, 00:12:53.361 "num_blocks": 65536, 00:12:53.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:53.361 "assigned_rate_limits": { 00:12:53.361 "rw_ios_per_sec": 0, 00:12:53.361 "rw_mbytes_per_sec": 0, 00:12:53.361 "r_mbytes_per_sec": 0, 00:12:53.361 "w_mbytes_per_sec": 0 00:12:53.361 }, 00:12:53.361 "claimed": true, 00:12:53.361 "claim_type": "exclusive_write", 00:12:53.361 "zoned": false, 00:12:53.361 "supported_io_types": { 00:12:53.361 "read": true, 00:12:53.361 "write": true, 00:12:53.361 "unmap": true, 00:12:53.361 "flush": true, 00:12:53.361 
"reset": true, 00:12:53.361 "nvme_admin": false, 00:12:53.361 "nvme_io": false, 00:12:53.361 "nvme_io_md": false, 00:12:53.361 "write_zeroes": true, 00:12:53.361 "zcopy": true, 00:12:53.361 "get_zone_info": false, 00:12:53.361 "zone_management": false, 00:12:53.361 "zone_append": false, 00:12:53.361 "compare": false, 00:12:53.361 "compare_and_write": false, 00:12:53.361 "abort": true, 00:12:53.361 "seek_hole": false, 00:12:53.361 "seek_data": false, 00:12:53.361 "copy": true, 00:12:53.361 "nvme_iov_md": false 00:12:53.361 }, 00:12:53.361 "memory_domains": [ 00:12:53.361 { 00:12:53.361 "dma_device_id": "system", 00:12:53.361 "dma_device_type": 1 00:12:53.361 }, 00:12:53.361 { 00:12:53.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.361 "dma_device_type": 2 00:12:53.361 } 00:12:53.361 ], 00:12:53.361 "driver_specific": { 00:12:53.361 "passthru": { 00:12:53.361 "name": "pt2", 00:12:53.361 "base_bdev_name": "malloc2" 00:12:53.361 } 00:12:53.361 } 00:12:53.361 }' 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:53.361 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.625 "name": "pt3", 00:12:53.625 "aliases": [ 00:12:53.625 "00000000-0000-0000-0000-000000000003" 00:12:53.625 ], 00:12:53.625 "product_name": "passthru", 00:12:53.625 "block_size": 512, 00:12:53.625 "num_blocks": 65536, 00:12:53.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:53.625 "assigned_rate_limits": { 00:12:53.625 "rw_ios_per_sec": 0, 00:12:53.625 "rw_mbytes_per_sec": 0, 00:12:53.625 "r_mbytes_per_sec": 0, 00:12:53.625 "w_mbytes_per_sec": 0 00:12:53.625 }, 00:12:53.625 "claimed": true, 00:12:53.625 "claim_type": "exclusive_write", 00:12:53.625 "zoned": false, 00:12:53.625 "supported_io_types": { 00:12:53.625 "read": true, 00:12:53.625 "write": true, 00:12:53.625 "unmap": true, 00:12:53.625 "flush": true, 00:12:53.625 "reset": true, 00:12:53.625 "nvme_admin": false, 00:12:53.625 "nvme_io": false, 00:12:53.625 "nvme_io_md": false, 00:12:53.625 "write_zeroes": true, 
00:12:53.625 "zcopy": true, 00:12:53.625 "get_zone_info": false, 00:12:53.625 "zone_management": false, 00:12:53.625 "zone_append": false, 00:12:53.625 "compare": false, 00:12:53.625 "compare_and_write": false, 00:12:53.625 "abort": true, 00:12:53.625 "seek_hole": false, 00:12:53.625 "seek_data": false, 00:12:53.625 "copy": true, 00:12:53.625 "nvme_iov_md": false 00:12:53.625 }, 00:12:53.625 "memory_domains": [ 00:12:53.625 { 00:12:53.625 "dma_device_id": "system", 00:12:53.625 "dma_device_type": 1 00:12:53.625 }, 00:12:53.625 { 00:12:53.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.625 "dma_device_type": 2 00:12:53.625 } 00:12:53.625 ], 00:12:53.625 "driver_specific": { 00:12:53.625 "passthru": { 00:12:53.625 "name": "pt3", 00:12:53.625 "base_bdev_name": "malloc3" 00:12:53.625 } 00:12:53.625 } 00:12:53.625 }' 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:12:53.625 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.894 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.894 "name": "pt4", 00:12:53.894 "aliases": [ 00:12:53.894 "00000000-0000-0000-0000-000000000004" 00:12:53.894 ], 00:12:53.894 "product_name": "passthru", 00:12:53.894 "block_size": 512, 00:12:53.894 "num_blocks": 65536, 00:12:53.894 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:53.894 "assigned_rate_limits": { 00:12:53.894 "rw_ios_per_sec": 0, 00:12:53.894 "rw_mbytes_per_sec": 0, 00:12:53.894 "r_mbytes_per_sec": 0, 00:12:53.894 "w_mbytes_per_sec": 0 00:12:53.894 }, 00:12:53.894 "claimed": true, 00:12:53.894 "claim_type": "exclusive_write", 00:12:53.894 "zoned": false, 00:12:53.894 "supported_io_types": { 00:12:53.894 "read": true, 00:12:53.894 "write": true, 00:12:53.894 "unmap": true, 00:12:53.894 "flush": true, 00:12:53.894 "reset": true, 00:12:53.894 "nvme_admin": false, 00:12:53.894 "nvme_io": false, 00:12:53.894 "nvme_io_md": false, 00:12:53.894 "write_zeroes": true, 00:12:53.894 "zcopy": true, 00:12:53.894 "get_zone_info": false, 00:12:53.894 "zone_management": false, 00:12:53.894 "zone_append": false, 00:12:53.894 
"compare": false, 00:12:53.894 "compare_and_write": false, 00:12:53.894 "abort": true, 00:12:53.894 "seek_hole": false, 00:12:53.894 "seek_data": false, 00:12:53.894 "copy": true, 00:12:53.894 "nvme_iov_md": false 00:12:53.894 }, 00:12:53.894 "memory_domains": [ 00:12:53.894 { 00:12:53.894 "dma_device_id": "system", 00:12:53.894 "dma_device_type": 1 00:12:53.894 }, 00:12:53.894 { 00:12:53.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.894 "dma_device_type": 2 00:12:53.894 } 00:12:53.894 ], 00:12:53.894 "driver_specific": { 00:12:53.894 "passthru": { 00:12:53.894 "name": "pt4", 00:12:53.894 "base_bdev_name": "malloc4" 00:12:53.895 } 00:12:53.895 } 00:12:53.895 }' 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:53.895 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:54.156 [2024-07-25 02:37:40.808797] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.156 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=dbd43f98-4a2e-11ef-9c8e-7947904e2597 00:12:54.156 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z dbd43f98-4a2e-11ef-9c8e-7947904e2597 ']' 00:12:54.156 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:54.156 [2024-07-25 02:37:40.968785] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:54.156 [2024-07-25 02:37:40.968796] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.156 [2024-07-25 02:37:40.968809] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.156 [2024-07-25 02:37:40.968837] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.156 [2024-07-25 02:37:40.968840] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d35b8435900 name raid_bdev1, state offline 00:12:54.156 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:12:54.156 02:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:54.415 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:54.415 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:54.415 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:54.415 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:54.675 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:54.675 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:54.675 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:54.675 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:54.935 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:54.935 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:12:55.194 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:55.194 02:37:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:55.194 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:12:55.453 [2024-07-25 02:37:42.240904] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:55.453 [2024-07-25 02:37:42.241399] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:55.453 [2024-07-25 02:37:42.241439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:55.453 [2024-07-25 02:37:42.241447] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:55.453 [2024-07-25 02:37:42.241459] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:55.453 [2024-07-25 02:37:42.241491] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:55.453 [2024-07-25 02:37:42.241499] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:55.453 [2024-07-25 02:37:42.241505] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:55.453 [2024-07-25 02:37:42.241511] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.453 [2024-07-25 02:37:42.241515] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d35b8435680 name raid_bdev1, state configuring 00:12:55.453 request: 00:12:55.453 { 00:12:55.453 "name": "raid_bdev1", 00:12:55.453 "raid_level": "concat", 00:12:55.453 "base_bdevs": [ 00:12:55.453 "malloc1", 00:12:55.453 "malloc2", 00:12:55.453 "malloc3", 00:12:55.453 "malloc4" 00:12:55.453 ], 00:12:55.453 "strip_size_kb": 64, 00:12:55.453 "superblock": false, 00:12:55.453 "method": "bdev_raid_create", 00:12:55.453 "req_id": 1 00:12:55.453 } 00:12:55.453 Got JSON-RPC error response 00:12:55.453 response: 00:12:55.453 { 00:12:55.453 "code": -17, 00:12:55.453 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:55.453 } 00:12:55.453 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:55.453 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:55.453 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:55.453 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:55.453 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.453 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:55.711 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:55.711 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:55.711 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:55.711 [2024-07-25 02:37:42.604938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:12:55.711 [2024-07-25 02:37:42.604966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.711 [2024-07-25 02:37:42.604973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8435180 00:12:55.711 [2024-07-25 02:37:42.604978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.711 [2024-07-25 02:37:42.605539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.711 [2024-07-25 02:37:42.605567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.711 [2024-07-25 02:37:42.605584] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:55.711 [2024-07-25 02:37:42.605593] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.711 pt1 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.970 "name": "raid_bdev1", 00:12:55.970 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:55.970 "strip_size_kb": 64, 00:12:55.970 "state": "configuring", 00:12:55.970 "raid_level": "concat", 00:12:55.970 "superblock": true, 00:12:55.970 "num_base_bdevs": 4, 00:12:55.970 "num_base_bdevs_discovered": 1, 00:12:55.970 "num_base_bdevs_operational": 4, 00:12:55.970 "base_bdevs_list": [ 00:12:55.970 { 00:12:55.970 "name": "pt1", 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.970 "is_configured": true, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "name": null, 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.970 "is_configured": false, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "name": null, 00:12:55.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.970 "is_configured": false, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 }, 00:12:55.970 { 00:12:55.970 "name": null, 00:12:55.970 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:55.970 "is_configured": false, 00:12:55.970 "data_offset": 2048, 00:12:55.970 "data_size": 63488 00:12:55.970 } 00:12:55.970 ] 00:12:55.970 }' 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:55.970 02:37:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.229 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:12:56.229 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.487 [2024-07-25 02:37:43.240992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.487 [2024-07-25 02:37:43.241022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.487 [2024-07-25 02:37:43.241045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8434780 00:12:56.487 [2024-07-25 02:37:43.241050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.487 [2024-07-25 02:37:43.241122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.487 [2024-07-25 02:37:43.241128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.487 [2024-07-25 02:37:43.241141] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.487 [2024-07-25 02:37:43.241147] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.487 pt2 00:12:56.487 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:56.746 [2024-07-25 02:37:43.421008] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.746 "name": "raid_bdev1", 00:12:56.746 
"uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:56.746 "strip_size_kb": 64, 00:12:56.746 "state": "configuring", 00:12:56.746 "raid_level": "concat", 00:12:56.746 "superblock": true, 00:12:56.746 "num_base_bdevs": 4, 00:12:56.746 "num_base_bdevs_discovered": 1, 00:12:56.746 "num_base_bdevs_operational": 4, 00:12:56.746 "base_bdevs_list": [ 00:12:56.746 { 00:12:56.746 "name": "pt1", 00:12:56.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.746 "is_configured": true, 00:12:56.746 "data_offset": 2048, 00:12:56.746 "data_size": 63488 00:12:56.746 }, 00:12:56.746 { 00:12:56.746 "name": null, 00:12:56.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.746 "is_configured": false, 00:12:56.746 "data_offset": 2048, 00:12:56.746 "data_size": 63488 00:12:56.746 }, 00:12:56.746 { 00:12:56.746 "name": null, 00:12:56.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.746 "is_configured": false, 00:12:56.746 "data_offset": 2048, 00:12:56.746 "data_size": 63488 00:12:56.746 }, 00:12:56.746 { 00:12:56.746 "name": null, 00:12:56.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.746 "is_configured": false, 00:12:56.746 "data_offset": 2048, 00:12:56.746 "data_size": 63488 00:12:56.746 } 00:12:56.746 ] 00:12:56.746 }' 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.746 02:37:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.005 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:57.005 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:57.005 02:37:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.265 [2024-07-25 02:37:44.041060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.265 [2024-07-25 02:37:44.041087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.265 [2024-07-25 02:37:44.041093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8434780 00:12:57.265 [2024-07-25 02:37:44.041098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.265 [2024-07-25 02:37:44.041178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.265 [2024-07-25 02:37:44.041185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.265 [2024-07-25 02:37:44.041198] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:57.265 [2024-07-25 02:37:44.041203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.265 pt2 00:12:57.265 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:57.265 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:57.265 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.525 [2024-07-25 02:37:44.225082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.525 [2024-07-25 02:37:44.225109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:57.525 [2024-07-25 02:37:44.225138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8435b80 00:12:57.525 [2024-07-25 02:37:44.225143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.525 [2024-07-25 02:37:44.225197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.525 [2024-07-25 02:37:44.225202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.525 [2024-07-25 02:37:44.225215] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:57.525 [2024-07-25 02:37:44.225219] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.525 pt3 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:57.525 [2024-07-25 02:37:44.381108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:57.525 [2024-07-25 02:37:44.381134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.525 [2024-07-25 02:37:44.381156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d35b8435900 00:12:57.525 [2024-07-25 02:37:44.381161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.525 [2024-07-25 02:37:44.381213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.525 [2024-07-25 02:37:44.381219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:57.525 [2024-07-25 02:37:44.381230] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:57.525 [2024-07-25 02:37:44.381235] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:57.525 [2024-07-25 02:37:44.381254] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d35b8434c80 00:12:57.525 [2024-07-25 02:37:44.381257] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:57.525 [2024-07-25 02:37:44.381283] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d35b8497e20 00:12:57.525 [2024-07-25 02:37:44.381317] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d35b8434c80 00:12:57.525 [2024-07-25 02:37:44.381319] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d35b8434c80 00:12:57.525 [2024-07-25 02:37:44.381334] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.525 pt4 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:57.525 02:37:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.525 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.785 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.785 "name": "raid_bdev1", 00:12:57.785 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:57.785 "strip_size_kb": 64, 00:12:57.785 "state": "online", 00:12:57.785 "raid_level": "concat", 00:12:57.785 "superblock": true, 00:12:57.785 "num_base_bdevs": 4, 00:12:57.785 "num_base_bdevs_discovered": 4, 00:12:57.785 "num_base_bdevs_operational": 4, 00:12:57.785 "base_bdevs_list": [ 00:12:57.785 { 00:12:57.785 "name": "pt1", 00:12:57.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.785 "is_configured": true, 00:12:57.785 "data_offset": 2048, 00:12:57.785 "data_size": 63488 00:12:57.785 }, 00:12:57.785 { 00:12:57.785 "name": "pt2", 00:12:57.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.785 "is_configured": true, 00:12:57.785 "data_offset": 2048, 00:12:57.785 "data_size": 63488 00:12:57.785 }, 00:12:57.785 { 00:12:57.785 "name": "pt3", 00:12:57.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.785 "is_configured": true, 00:12:57.785 "data_offset": 2048, 00:12:57.785 "data_size": 63488 00:12:57.785 }, 00:12:57.785 { 00:12:57.785 "name": "pt4", 00:12:57.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.785 "is_configured": true, 00:12:57.785 "data_offset": 2048, 00:12:57.785 "data_size": 63488 00:12:57.785 } 00:12:57.785 ] 00:12:57.785 }' 00:12:57.785 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.785 02:37:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:58.045 02:37:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:12:58.305 [2024-07-25 02:37:45.025196] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.305 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:58.305 "name": "raid_bdev1", 00:12:58.305 "aliases": [ 00:12:58.305 "dbd43f98-4a2e-11ef-9c8e-7947904e2597" 00:12:58.305 ], 00:12:58.305 "product_name": "Raid Volume", 00:12:58.305 "block_size": 512, 00:12:58.305 "num_blocks": 253952, 00:12:58.305 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:58.305 "assigned_rate_limits": { 00:12:58.305 "rw_ios_per_sec": 0, 00:12:58.305 "rw_mbytes_per_sec": 0, 00:12:58.305 "r_mbytes_per_sec": 0, 00:12:58.305 "w_mbytes_per_sec": 0 00:12:58.305 }, 00:12:58.305 "claimed": false, 00:12:58.305 "zoned": false, 00:12:58.305 "supported_io_types": { 00:12:58.305 "read": true, 00:12:58.305 "write": true, 00:12:58.305 "unmap": true, 00:12:58.305 "flush": true, 00:12:58.305 "reset": true, 00:12:58.305 "nvme_admin": false, 00:12:58.305 "nvme_io": false, 00:12:58.305 "nvme_io_md": false, 00:12:58.305 "write_zeroes": true, 00:12:58.305 "zcopy": false, 00:12:58.305 "get_zone_info": false, 00:12:58.305 "zone_management": false, 00:12:58.305 "zone_append": false, 00:12:58.305 "compare": false, 00:12:58.305 "compare_and_write": false, 00:12:58.305 "abort": false, 00:12:58.305 "seek_hole": false, 00:12:58.305 "seek_data": false, 00:12:58.305 "copy": false, 00:12:58.305 "nvme_iov_md": false 00:12:58.305 }, 00:12:58.305 "memory_domains": [ 00:12:58.305 { 00:12:58.305 "dma_device_id": "system", 00:12:58.305 "dma_device_type": 1 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.305 "dma_device_type": 2 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "system", 00:12:58.305 "dma_device_type": 1 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.305 "dma_device_type": 2 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "system", 00:12:58.305 "dma_device_type": 1 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.305 "dma_device_type": 2 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "system", 00:12:58.305 "dma_device_type": 1 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.305 "dma_device_type": 2 00:12:58.305 } 00:12:58.305 ], 00:12:58.305 "driver_specific": { 00:12:58.305 "raid": { 00:12:58.305 "uuid": "dbd43f98-4a2e-11ef-9c8e-7947904e2597", 00:12:58.305 "strip_size_kb": 64, 00:12:58.305 "state": "online", 00:12:58.305 "raid_level": "concat", 00:12:58.305 "superblock": true, 00:12:58.305 "num_base_bdevs": 4, 00:12:58.305 "num_base_bdevs_discovered": 4, 00:12:58.305 "num_base_bdevs_operational": 4, 00:12:58.305 "base_bdevs_list": [ 00:12:58.305 { 00:12:58.305 "name": "pt1", 00:12:58.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.305 "is_configured": true, 00:12:58.305 "data_offset": 2048, 00:12:58.305 "data_size": 63488 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "name": "pt2", 00:12:58.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.305 "is_configured": true, 00:12:58.305 "data_offset": 2048, 00:12:58.305 "data_size": 63488 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "name": "pt3", 00:12:58.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.305 "is_configured": true, 00:12:58.305 "data_offset": 2048, 00:12:58.305 "data_size": 63488 00:12:58.305 }, 00:12:58.305 { 00:12:58.305 "name": "pt4", 
00:12:58.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:58.305 "is_configured": true, 00:12:58.305 "data_offset": 2048, 00:12:58.305 "data_size": 63488 00:12:58.305 } 00:12:58.305 ] 00:12:58.305 } 00:12:58.305 } 00:12:58.305 }' 00:12:58.305 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.305 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:58.305 pt2 00:12:58.305 pt3 00:12:58.305 pt4' 00:12:58.305 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.305 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.305 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.565 "name": "pt1", 00:12:58.565 "aliases": [ 00:12:58.565 "00000000-0000-0000-0000-000000000001" 00:12:58.565 ], 00:12:58.565 "product_name": "passthru", 00:12:58.565 "block_size": 512, 00:12:58.565 "num_blocks": 65536, 00:12:58.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.565 "assigned_rate_limits": { 00:12:58.565 "rw_ios_per_sec": 0, 00:12:58.565 "rw_mbytes_per_sec": 0, 00:12:58.565 "r_mbytes_per_sec": 0, 00:12:58.565 "w_mbytes_per_sec": 0 00:12:58.565 }, 00:12:58.565 "claimed": true, 00:12:58.565 "claim_type": "exclusive_write", 00:12:58.565 "zoned": false, 00:12:58.565 "supported_io_types": { 00:12:58.565 "read": true, 00:12:58.565 "write": true, 00:12:58.565 "unmap": true, 00:12:58.565 "flush": true, 00:12:58.565 "reset": true, 00:12:58.565 "nvme_admin": false, 00:12:58.565 "nvme_io": false, 00:12:58.565 "nvme_io_md": false, 00:12:58.565 "write_zeroes": true, 00:12:58.565 "zcopy": true, 00:12:58.565 "get_zone_info": false, 00:12:58.565 "zone_management": false, 00:12:58.565 "zone_append": false, 00:12:58.565 "compare": false, 00:12:58.565 "compare_and_write": false, 00:12:58.565 "abort": true, 00:12:58.565 "seek_hole": false, 00:12:58.565 "seek_data": false, 00:12:58.565 "copy": true, 00:12:58.565 "nvme_iov_md": false 00:12:58.565 }, 00:12:58.565 "memory_domains": [ 00:12:58.565 { 00:12:58.565 "dma_device_id": "system", 00:12:58.565 "dma_device_type": 1 00:12:58.565 }, 00:12:58.565 { 00:12:58.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.565 "dma_device_type": 2 00:12:58.565 } 00:12:58.565 ], 00:12:58.565 "driver_specific": { 00:12:58.565 "passthru": { 00:12:58.565 "name": "pt1", 00:12:58.565 "base_bdev_name": "malloc1" 00:12:58.565 } 00:12:58.565 } 00:12:58.565 }' 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:58.565 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.825 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.825 "name": "pt2", 00:12:58.825 "aliases": [ 00:12:58.825 "00000000-0000-0000-0000-000000000002" 00:12:58.825 ], 00:12:58.825 "product_name": "passthru", 00:12:58.825 "block_size": 512, 00:12:58.825 "num_blocks": 65536, 00:12:58.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.825 "assigned_rate_limits": { 00:12:58.825 "rw_ios_per_sec": 0, 00:12:58.825 "rw_mbytes_per_sec": 0, 00:12:58.825 "r_mbytes_per_sec": 0, 00:12:58.825 "w_mbytes_per_sec": 0 00:12:58.825 }, 00:12:58.825 "claimed": true, 00:12:58.825 "claim_type": "exclusive_write", 00:12:58.825 "zoned": false, 00:12:58.825 "supported_io_types": { 00:12:58.825 "read": true, 00:12:58.825 "write": true, 00:12:58.825 "unmap": true, 00:12:58.825 "flush": true, 00:12:58.825 "reset": true, 00:12:58.825 "nvme_admin": false, 00:12:58.825 "nvme_io": false, 00:12:58.825 "nvme_io_md": false, 00:12:58.825 "write_zeroes": true, 00:12:58.825 "zcopy": true, 00:12:58.825 "get_zone_info": false, 00:12:58.825 "zone_management": false, 00:12:58.825 "zone_append": false, 00:12:58.825 "compare": false, 00:12:58.825 "compare_and_write": false, 00:12:58.825 "abort": true, 00:12:58.825 "seek_hole": false, 00:12:58.825 "seek_data": false, 00:12:58.825 "copy": true, 00:12:58.825 "nvme_iov_md": false 00:12:58.825 }, 00:12:58.825 "memory_domains": [ 00:12:58.825 { 00:12:58.825 "dma_device_id": "system", 00:12:58.825 "dma_device_type": 1 00:12:58.825 }, 00:12:58.825 { 00:12:58.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.825 "dma_device_type": 2 00:12:58.825 } 00:12:58.825 ], 00:12:58.825 "driver_specific": { 00:12:58.825 "passthru": { 00:12:58.825 "name": "pt2", 00:12:58.825 "base_bdev_name": "malloc2" 00:12:58.825 } 00:12:58.825 } 00:12:58.825 }' 00:12:58.825 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.825 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.825 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.825 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.825 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.826 02:37:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:58.826 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:59.085 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:59.085 "name": "pt3", 00:12:59.085 "aliases": [ 00:12:59.085 "00000000-0000-0000-0000-000000000003" 00:12:59.085 ], 00:12:59.086 "product_name": "passthru", 00:12:59.086 "block_size": 512, 00:12:59.086 "num_blocks": 65536, 00:12:59.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.086 "assigned_rate_limits": { 00:12:59.086 "rw_ios_per_sec": 0, 00:12:59.086 "rw_mbytes_per_sec": 0, 00:12:59.086 "r_mbytes_per_sec": 0, 00:12:59.086 "w_mbytes_per_sec": 0 00:12:59.086 }, 00:12:59.086 "claimed": true, 00:12:59.086 "claim_type": "exclusive_write", 00:12:59.086 "zoned": false, 00:12:59.086 "supported_io_types": { 00:12:59.086 "read": true, 00:12:59.086 "write": true, 00:12:59.086 "unmap": true, 00:12:59.086 "flush": true, 00:12:59.086 "reset": true, 00:12:59.086 "nvme_admin": false, 00:12:59.086 "nvme_io": false, 00:12:59.086 "nvme_io_md": false, 00:12:59.086 "write_zeroes": true, 00:12:59.086 "zcopy": true, 00:12:59.086 "get_zone_info": false, 00:12:59.086 "zone_management": false, 00:12:59.086 "zone_append": false, 00:12:59.086 "compare": false, 00:12:59.086 "compare_and_write": false, 00:12:59.086 "abort": true, 00:12:59.086 "seek_hole": false, 00:12:59.086 "seek_data": false, 00:12:59.086 "copy": true, 00:12:59.086 "nvme_iov_md": false 00:12:59.086 }, 00:12:59.086 "memory_domains": [ 00:12:59.086 { 00:12:59.086 "dma_device_id": "system", 00:12:59.086 "dma_device_type": 1 00:12:59.086 }, 00:12:59.086 { 00:12:59.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.086 "dma_device_type": 2 00:12:59.086 } 00:12:59.086 ], 00:12:59.086 "driver_specific": { 00:12:59.086 "passthru": { 00:12:59.086 "name": "pt3", 00:12:59.086 "base_bdev_name": "malloc3" 00:12:59.086 } 00:12:59.086 } 00:12:59.086 }' 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:12:59.086 02:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:59.346 "name": "pt4", 00:12:59.346 "aliases": [ 00:12:59.346 "00000000-0000-0000-0000-000000000004" 00:12:59.346 ], 00:12:59.346 "product_name": "passthru", 00:12:59.346 "block_size": 512, 00:12:59.346 "num_blocks": 65536, 00:12:59.346 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:59.346 "assigned_rate_limits": { 00:12:59.346 "rw_ios_per_sec": 0, 00:12:59.346 "rw_mbytes_per_sec": 0, 00:12:59.346 "r_mbytes_per_sec": 0, 00:12:59.346 "w_mbytes_per_sec": 0 00:12:59.346 }, 00:12:59.346 "claimed": true, 00:12:59.346 "claim_type": "exclusive_write", 00:12:59.346 "zoned": false, 00:12:59.346 "supported_io_types": { 00:12:59.346 "read": true, 00:12:59.346 "write": true, 00:12:59.346 "unmap": true, 00:12:59.346 "flush": true, 00:12:59.346 "reset": true, 00:12:59.346 "nvme_admin": false, 00:12:59.346 "nvme_io": false, 00:12:59.346 "nvme_io_md": false, 00:12:59.346 "write_zeroes": true, 00:12:59.346 "zcopy": true, 00:12:59.346 "get_zone_info": false, 00:12:59.346 "zone_management": false, 00:12:59.346 "zone_append": false, 00:12:59.346 "compare": false, 00:12:59.346 "compare_and_write": false, 00:12:59.346 "abort": true, 00:12:59.346 "seek_hole": false, 00:12:59.346 "seek_data": false, 00:12:59.346 "copy": true, 00:12:59.346 "nvme_iov_md": false 00:12:59.346 }, 00:12:59.346 "memory_domains": [ 00:12:59.346 { 00:12:59.346 "dma_device_id": "system", 00:12:59.346 "dma_device_type": 1 00:12:59.346 }, 00:12:59.346 { 00:12:59.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.346 "dma_device_type": 2 00:12:59.346 } 00:12:59.346 ], 00:12:59.346 "driver_specific": { 00:12:59.346 "passthru": { 00:12:59.346 "name": "pt4", 00:12:59.346 "base_bdev_name": "malloc4" 00:12:59.346 } 00:12:59.346 } 00:12:59.346 }' 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:59.346 02:37:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:59.346 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:59.606 [2024-07-25 02:37:46.349320] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' dbd43f98-4a2e-11ef-9c8e-7947904e2597 '!=' dbd43f98-4a2e-11ef-9c8e-7947904e2597 ']' 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 61878 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 61878 ']' 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 61878 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 61878 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:59.606 killing process with pid 61878 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61878' 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 61878 00:12:59.606 [2024-07-25 02:37:46.382481] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.606 [2024-07-25 02:37:46.382496] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.606 [2024-07-25 02:37:46.382520] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.606 [2024-07-25 02:37:46.382524] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d35b8434c80 name raid_bdev1, state offline 00:12:59.606 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 61878 00:12:59.606 [2024-07-25 02:37:46.401659] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.866 02:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:59.866 00:12:59.866 real 0m10.196s 00:12:59.866 user 0m17.710s 00:12:59.866 sys 0m1.958s 00:12:59.866 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.866 02:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.866 ************************************ 00:12:59.866 END TEST raid_superblock_test 00:12:59.866 ************************************ 00:12:59.866 02:37:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:59.866 02:37:46 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 
read 00:12:59.866 02:37:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:59.866 02:37:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.866 02:37:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.866 ************************************ 00:12:59.866 START TEST raid_read_error_test 00:12:59.866 ************************************ 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:59.866 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.TmM2m0YPcZ 
00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62267 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62267 /var/tmp/spdk-raid.sock 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62267 ']' 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.867 02:37:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.867 [2024-07-25 02:37:46.664962] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:12:59.867 [2024-07-25 02:37:46.665250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:00.436 EAL: TSC is not safe to use in SMP mode 00:13:00.436 EAL: TSC is not invariant 00:13:00.436 [2024-07-25 02:37:47.108079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.436 [2024-07-25 02:37:47.187830] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:00.436 [2024-07-25 02:37:47.189497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.436 [2024-07-25 02:37:47.190074] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.436 [2024-07-25 02:37:47.190087] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.695 02:37:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.695 02:37:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:00.695 02:37:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:00.695 02:37:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.954 BaseBdev1_malloc 00:13:00.954 02:37:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:01.213 true 00:13:01.213 02:37:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:01.213 [2024-07-25 02:37:48.073044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:01.213 [2024-07-25 02:37:48.073111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.213 [2024-07-25 02:37:48.073130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x717e6a34780 00:13:01.213 [2024-07-25 02:37:48.073136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.213 [2024-07-25 02:37:48.073612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.213 [2024-07-25 02:37:48.073638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.213 BaseBdev1 00:13:01.213 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:01.213 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.471 BaseBdev2_malloc 00:13:01.471 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:01.730 true 00:13:01.730 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:01.730 [2024-07-25 02:37:48.613097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:01.730 [2024-07-25 02:37:48.613136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.730 [2024-07-25 02:37:48.613173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x717e6a34c80 00:13:01.730 [2024-07-25 02:37:48.613179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.730 [2024-07-25 02:37:48.613616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.731 [2024-07-25 02:37:48.613659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:13:01.731 BaseBdev2 00:13:01.731 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:01.731 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.990 BaseBdev3_malloc 00:13:01.990 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:02.249 true 00:13:02.249 02:37:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:02.249 [2024-07-25 02:37:49.141147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:02.249 [2024-07-25 02:37:49.141196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.249 [2024-07-25 02:37:49.141216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x717e6a35180 00:13:02.249 [2024-07-25 02:37:49.141221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.249 [2024-07-25 02:37:49.141679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.249 [2024-07-25 02:37:49.141705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:02.249 BaseBdev3 00:13:02.509 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:02.509 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:02.509 BaseBdev4_malloc 00:13:02.509 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:02.768 true 00:13:02.768 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:03.027 [2024-07-25 02:37:49.677192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:03.027 [2024-07-25 02:37:49.677231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.027 [2024-07-25 02:37:49.677251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x717e6a35680 00:13:03.027 [2024-07-25 02:37:49.677257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.027 [2024-07-25 02:37:49.677713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.027 [2024-07-25 02:37:49.677741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:03.027 BaseBdev4 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:03.027 [2024-07-25 02:37:49.837214] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.027 [2024-07-25 02:37:49.837628] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:03.027 [2024-07-25 02:37:49.837651] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.027 [2024-07-25 02:37:49.837663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.027 [2024-07-25 02:37:49.837716] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x717e6a35900 00:13:03.027 [2024-07-25 02:37:49.837722] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:03.027 [2024-07-25 02:37:49.837753] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x717e6aa0e20 00:13:03.027 [2024-07-25 02:37:49.837804] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x717e6a35900 00:13:03.027 [2024-07-25 02:37:49.837808] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x717e6a35900 00:13:03.027 [2024-07-25 02:37:49.837824] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.027 02:37:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.286 02:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.286 "name": "raid_bdev1", 00:13:03.286 "uuid": "e25b7bfc-4a2e-11ef-9c8e-7947904e2597", 00:13:03.286 "strip_size_kb": 64, 00:13:03.286 "state": "online", 00:13:03.286 "raid_level": "concat", 00:13:03.286 "superblock": true, 00:13:03.286 "num_base_bdevs": 4, 00:13:03.286 "num_base_bdevs_discovered": 4, 00:13:03.286 "num_base_bdevs_operational": 4, 00:13:03.286 "base_bdevs_list": [ 00:13:03.286 { 00:13:03.286 "name": "BaseBdev1", 00:13:03.286 "uuid": "22da7b0e-eb4c-065f-b632-ef0d06af2ba7", 00:13:03.286 "is_configured": true, 00:13:03.286 "data_offset": 2048, 00:13:03.286 "data_size": 63488 00:13:03.286 }, 00:13:03.286 { 00:13:03.286 "name": "BaseBdev2", 00:13:03.286 "uuid": "4e98d6a3-9cf8-355c-858c-16b31fddcf7c", 00:13:03.286 "is_configured": true, 00:13:03.286 "data_offset": 2048, 00:13:03.286 "data_size": 63488 00:13:03.286 }, 00:13:03.286 { 00:13:03.286 "name": "BaseBdev3", 00:13:03.286 "uuid": "be5ba115-0e89-fa50-b0b0-66157ba1077c", 00:13:03.286 
"is_configured": true, 00:13:03.286 "data_offset": 2048, 00:13:03.286 "data_size": 63488 00:13:03.286 }, 00:13:03.286 { 00:13:03.286 "name": "BaseBdev4", 00:13:03.286 "uuid": "e098fc43-7c21-f256-9552-ad13e3ebd3df", 00:13:03.286 "is_configured": true, 00:13:03.286 "data_offset": 2048, 00:13:03.286 "data_size": 63488 00:13:03.286 } 00:13:03.286 ] 00:13:03.286 }' 00:13:03.286 02:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.286 02:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.545 02:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:03.545 02:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:03.545 [2024-07-25 02:37:50.401313] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x717e6aa0ec0 00:13:04.485 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.744 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.004 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.004 "name": "raid_bdev1", 00:13:05.004 "uuid": "e25b7bfc-4a2e-11ef-9c8e-7947904e2597", 00:13:05.004 "strip_size_kb": 64, 00:13:05.004 "state": "online", 00:13:05.004 "raid_level": "concat", 00:13:05.004 "superblock": true, 00:13:05.004 "num_base_bdevs": 4, 00:13:05.004 "num_base_bdevs_discovered": 4, 00:13:05.004 "num_base_bdevs_operational": 4, 00:13:05.004 "base_bdevs_list": [ 00:13:05.004 { 00:13:05.004 "name": "BaseBdev1", 00:13:05.004 "uuid": "22da7b0e-eb4c-065f-b632-ef0d06af2ba7", 00:13:05.004 "is_configured": true, 
00:13:05.004 "data_offset": 2048, 00:13:05.004 "data_size": 63488 00:13:05.004 }, 00:13:05.004 { 00:13:05.004 "name": "BaseBdev2", 00:13:05.004 "uuid": "4e98d6a3-9cf8-355c-858c-16b31fddcf7c", 00:13:05.004 "is_configured": true, 00:13:05.004 "data_offset": 2048, 00:13:05.004 "data_size": 63488 00:13:05.004 }, 00:13:05.004 { 00:13:05.004 "name": "BaseBdev3", 00:13:05.004 "uuid": "be5ba115-0e89-fa50-b0b0-66157ba1077c", 00:13:05.004 "is_configured": true, 00:13:05.004 "data_offset": 2048, 00:13:05.004 "data_size": 63488 00:13:05.004 }, 00:13:05.004 { 00:13:05.004 "name": "BaseBdev4", 00:13:05.004 "uuid": "e098fc43-7c21-f256-9552-ad13e3ebd3df", 00:13:05.004 "is_configured": true, 00:13:05.004 "data_offset": 2048, 00:13:05.004 "data_size": 63488 00:13:05.004 } 00:13:05.004 ] 00:13:05.004 }' 00:13:05.004 02:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.004 02:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.263 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:05.523 [2024-07-25 02:37:52.190111] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.523 [2024-07-25 02:37:52.190140] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.523 [2024-07-25 02:37:52.190411] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.523 [2024-07-25 02:37:52.190420] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.523 [2024-07-25 02:37:52.190427] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.523 [2024-07-25 02:37:52.190431] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x717e6a35900 name raid_bdev1, state offline 00:13:05.523 0 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62267 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62267 ']' 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62267 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62267 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:05.523 killing process with pid 62267 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62267' 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62267 00:13:05.523 [2024-07-25 02:37:52.236903] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62267 00:13:05.523 [2024-07-25 02:37:52.255648] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.523 02:37:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.TmM2m0YPcZ 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:05.523 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:13:05.783 00:13:05.783 real 0m5.793s 00:13:05.783 user 0m8.815s 00:13:05.783 sys 0m1.034s 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.783 02:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.783 ************************************ 00:13:05.783 END TEST raid_read_error_test 00:13:05.783 ************************************ 00:13:05.783 02:37:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:05.783 02:37:52 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:05.783 02:37:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:05.783 02:37:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.784 02:37:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 ************************************ 00:13:05.784 START TEST raid_write_error_test 00:13:05.784 ************************************ 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.cEwcvw1FB0 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62401 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62401 /var/tmp/spdk-raid.sock 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 62401 ']' 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.784 02:37:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.784 [2024-07-25 02:37:52.517843] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:13:05.784 [2024-07-25 02:37:52.518095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:06.353 EAL: TSC is not safe to use in SMP mode 00:13:06.353 EAL: TSC is not invariant 00:13:06.353 [2024-07-25 02:37:52.956428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.353 [2024-07-25 02:37:53.048693] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
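For reference, the write-error pass starting here drives the same RPC sequence the read-error pass above just finished, only injecting failures on writes instead of reads. A minimal sketch of that sequence, condensed to a single base bdev and assuming the bdevperf target launched above is already listening on /var/tmp/spdk-raid.sock (these are the same calls visible in the trace, not additional captured output):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # build one error-injectable base bdev (repeated for BaseBdev2..BaseBdev4 in the trace)
    $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc            # 32 MB malloc bdev, 512-byte blocks
    $RPC bdev_error_create BaseBdev1_malloc                       # exposes it as EE_BaseBdev1_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 # claims the error bdev as BaseBdev1
    # assemble the 4-disk concat array with 64 KiB strips and an on-disk superblock
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # arm the failure (the read pass used "read failure" instead), run the workload, then tear down
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
    $RPC bdev_raid_delete raid_bdev1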
00:13:06.353 [2024-07-25 02:37:53.050364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.353 [2024-07-25 02:37:53.050927] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.353 [2024-07-25 02:37:53.050939] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.615 02:37:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.615 02:37:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:06.615 02:37:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:06.615 02:37:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.878 BaseBdev1_malloc 00:13:06.878 02:37:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:06.878 true 00:13:06.878 02:37:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:07.140 [2024-07-25 02:37:53.929811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:07.140 [2024-07-25 02:37:53.929855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.140 [2024-07-25 02:37:53.929877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213703834780 00:13:07.140 [2024-07-25 02:37:53.929883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.140 [2024-07-25 02:37:53.930348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.141 [2024-07-25 02:37:53.930374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:07.141 BaseBdev1 00:13:07.141 02:37:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:07.141 02:37:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:07.410 BaseBdev2_malloc 00:13:07.410 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:07.410 true 00:13:07.688 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:07.688 [2024-07-25 02:37:54.457853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:07.688 [2024-07-25 02:37:54.457893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.688 [2024-07-25 02:37:54.457928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213703834c80 00:13:07.688 [2024-07-25 02:37:54.457934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.688 [2024-07-25 02:37:54.458360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.688 [2024-07-25 02:37:54.458388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:13:07.688 BaseBdev2 00:13:07.688 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:07.688 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:07.963 BaseBdev3_malloc 00:13:07.963 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:07.963 true 00:13:07.963 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:08.223 [2024-07-25 02:37:54.981899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:08.223 [2024-07-25 02:37:54.981937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.223 [2024-07-25 02:37:54.981956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213703835180 00:13:08.223 [2024-07-25 02:37:54.981962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.223 [2024-07-25 02:37:54.982374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.223 [2024-07-25 02:37:54.982403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:08.223 BaseBdev3 00:13:08.223 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:08.223 02:37:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:08.482 BaseBdev4_malloc 00:13:08.483 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:08.483 true 00:13:08.483 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:08.742 [2024-07-25 02:37:55.529952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:08.743 [2024-07-25 02:37:55.529989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.743 [2024-07-25 02:37:55.530007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213703835680 00:13:08.743 [2024-07-25 02:37:55.530012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.743 [2024-07-25 02:37:55.530455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.743 [2024-07-25 02:37:55.530480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:08.743 BaseBdev4 00:13:08.743 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:09.002 [2024-07-25 02:37:55.709977] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.002 [2024-07-25 02:37:55.710369] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.002 [2024-07-25 02:37:55.710395] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.002 [2024-07-25 02:37:55.710405] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.002 [2024-07-25 02:37:55.710455] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x213703835900 00:13:09.002 [2024-07-25 02:37:55.710460] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:09.002 [2024-07-25 02:37:55.710489] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2137038a0e20 00:13:09.002 [2024-07-25 02:37:55.710539] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213703835900 00:13:09.002 [2024-07-25 02:37:55.710542] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213703835900 00:13:09.002 [2024-07-25 02:37:55.710559] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:09.002 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.003 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.003 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.003 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.003 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.003 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.263 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.263 "name": "raid_bdev1", 00:13:09.263 "uuid": "e5db98cc-4a2e-11ef-9c8e-7947904e2597", 00:13:09.263 "strip_size_kb": 64, 00:13:09.263 "state": "online", 00:13:09.263 "raid_level": "concat", 00:13:09.263 "superblock": true, 00:13:09.263 "num_base_bdevs": 4, 00:13:09.263 "num_base_bdevs_discovered": 4, 00:13:09.263 "num_base_bdevs_operational": 4, 00:13:09.263 "base_bdevs_list": [ 00:13:09.263 { 00:13:09.263 "name": "BaseBdev1", 00:13:09.263 "uuid": "91e7a941-637f-f557-b58e-94b781d61b34", 00:13:09.263 "is_configured": true, 00:13:09.263 "data_offset": 2048, 00:13:09.263 "data_size": 63488 00:13:09.263 }, 00:13:09.263 { 00:13:09.263 "name": "BaseBdev2", 00:13:09.263 "uuid": "83a138de-3b01-2e51-8d66-439cd777e869", 00:13:09.263 "is_configured": true, 00:13:09.263 "data_offset": 2048, 00:13:09.263 "data_size": 63488 00:13:09.263 }, 00:13:09.263 { 00:13:09.263 "name": "BaseBdev3", 00:13:09.263 "uuid": 
"a155704b-65c0-665c-a096-83f606262f1f", 00:13:09.263 "is_configured": true, 00:13:09.263 "data_offset": 2048, 00:13:09.263 "data_size": 63488 00:13:09.263 }, 00:13:09.263 { 00:13:09.263 "name": "BaseBdev4", 00:13:09.263 "uuid": "c7419fc9-ad84-1456-85de-45c4ec02de09", 00:13:09.263 "is_configured": true, 00:13:09.263 "data_offset": 2048, 00:13:09.263 "data_size": 63488 00:13:09.263 } 00:13:09.263 ] 00:13:09.263 }' 00:13:09.263 02:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.263 02:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.523 02:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:09.523 02:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:09.523 [2024-07-25 02:37:56.274096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2137038a0ec0 00:13:10.463 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.723 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.983 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.983 "name": "raid_bdev1", 00:13:10.983 "uuid": "e5db98cc-4a2e-11ef-9c8e-7947904e2597", 00:13:10.983 "strip_size_kb": 64, 00:13:10.983 "state": "online", 00:13:10.983 "raid_level": "concat", 00:13:10.983 "superblock": true, 00:13:10.983 "num_base_bdevs": 4, 00:13:10.983 "num_base_bdevs_discovered": 4, 00:13:10.983 "num_base_bdevs_operational": 4, 00:13:10.983 "base_bdevs_list": [ 00:13:10.983 { 00:13:10.983 "name": "BaseBdev1", 00:13:10.983 "uuid": 
"91e7a941-637f-f557-b58e-94b781d61b34", 00:13:10.983 "is_configured": true, 00:13:10.983 "data_offset": 2048, 00:13:10.983 "data_size": 63488 00:13:10.983 }, 00:13:10.983 { 00:13:10.983 "name": "BaseBdev2", 00:13:10.983 "uuid": "83a138de-3b01-2e51-8d66-439cd777e869", 00:13:10.983 "is_configured": true, 00:13:10.983 "data_offset": 2048, 00:13:10.983 "data_size": 63488 00:13:10.983 }, 00:13:10.983 { 00:13:10.983 "name": "BaseBdev3", 00:13:10.983 "uuid": "a155704b-65c0-665c-a096-83f606262f1f", 00:13:10.983 "is_configured": true, 00:13:10.983 "data_offset": 2048, 00:13:10.983 "data_size": 63488 00:13:10.983 }, 00:13:10.983 { 00:13:10.983 "name": "BaseBdev4", 00:13:10.983 "uuid": "c7419fc9-ad84-1456-85de-45c4ec02de09", 00:13:10.983 "is_configured": true, 00:13:10.983 "data_offset": 2048, 00:13:10.983 "data_size": 63488 00:13:10.983 } 00:13:10.983 ] 00:13:10.983 }' 00:13:10.983 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.983 02:37:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.983 02:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:11.243 [2024-07-25 02:37:58.054823] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.243 [2024-07-25 02:37:58.054851] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.243 [2024-07-25 02:37:58.055145] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.243 [2024-07-25 02:37:58.055153] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.243 [2024-07-25 02:37:58.055160] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.243 [2024-07-25 02:37:58.055164] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213703835900 name raid_bdev1, state offline 00:13:11.243 0 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62401 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62401 ']' 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62401 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62401 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:11.243 killing process with pid 62401 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62401' 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62401 00:13:11.243 [2024-07-25 02:37:58.084975] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.243 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62401 00:13:11.243 [2024-07-25 
02:37:58.103332] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.cEwcvw1FB0 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.56 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.56 != \0\.\0\0 ]] 00:13:11.504 00:13:11.504 real 0m5.785s 00:13:11.504 user 0m8.797s 00:13:11.504 sys 0m1.034s 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.504 02:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.504 ************************************ 00:13:11.504 END TEST raid_write_error_test 00:13:11.504 ************************************ 00:13:11.504 02:37:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:11.504 02:37:58 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:11.504 02:37:58 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:11.504 02:37:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:11.504 02:37:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.504 02:37:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.504 ************************************ 00:13:11.504 START TEST raid_state_function_test 00:13:11.504 ************************************ 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:11.504 Process raid pid: 62529 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=62529 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62529' 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 62529 /var/tmp/spdk-raid.sock 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 62529 ']' 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.504 02:37:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.504 [2024-07-25 02:37:58.361333] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
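The state-function pass starting here exercises the raid1 state machine: the array is created before any of its base bdevs exist and is expected to sit in state "configuring" while they are added one by one. A minimal sketch of the flow traced below, assuming the bdev_svc target just launched on /var/tmp/spdk-raid.sock:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # creating the array while all four base bdevs are missing leaves it in state "configuring"
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # as each named base bdev is created it is claimed by the waiting array
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    # the array then reports "num_base_bdevs_discovered": 1 and stays "configuring" until all four exist
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'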
00:13:11.504 [2024-07-25 02:37:58.361664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:12.075 EAL: TSC is not safe to use in SMP mode 00:13:12.075 EAL: TSC is not invariant 00:13:12.075 [2024-07-25 02:37:58.783167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.075 [2024-07-25 02:37:58.877395] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:12.075 [2024-07-25 02:37:58.879041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.075 [2024-07-25 02:37:58.879625] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.075 [2024-07-25 02:37:58.879637] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:12.644 [2024-07-25 02:37:59.430583] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:12.644 [2024-07-25 02:37:59.430638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:12.644 [2024-07-25 02:37:59.430642] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:12.644 [2024-07-25 02:37:59.430647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:12.644 [2024-07-25 02:37:59.430650] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:12.644 [2024-07-25 02:37:59.430656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:12.644 [2024-07-25 02:37:59.430658] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:12.644 [2024-07-25 02:37:59.430663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:12.644 02:37:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.644 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.904 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.904 "name": "Existed_Raid", 00:13:12.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.904 "strip_size_kb": 0, 00:13:12.904 "state": "configuring", 00:13:12.904 "raid_level": "raid1", 00:13:12.904 "superblock": false, 00:13:12.904 "num_base_bdevs": 4, 00:13:12.904 "num_base_bdevs_discovered": 0, 00:13:12.904 "num_base_bdevs_operational": 4, 00:13:12.904 "base_bdevs_list": [ 00:13:12.904 { 00:13:12.904 "name": "BaseBdev1", 00:13:12.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.904 "is_configured": false, 00:13:12.904 "data_offset": 0, 00:13:12.904 "data_size": 0 00:13:12.904 }, 00:13:12.904 { 00:13:12.904 "name": "BaseBdev2", 00:13:12.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.904 "is_configured": false, 00:13:12.904 "data_offset": 0, 00:13:12.904 "data_size": 0 00:13:12.904 }, 00:13:12.904 { 00:13:12.904 "name": "BaseBdev3", 00:13:12.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.904 "is_configured": false, 00:13:12.904 "data_offset": 0, 00:13:12.904 "data_size": 0 00:13:12.904 }, 00:13:12.904 { 00:13:12.904 "name": "BaseBdev4", 00:13:12.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.904 "is_configured": false, 00:13:12.904 "data_offset": 0, 00:13:12.904 "data_size": 0 00:13:12.904 } 00:13:12.904 ] 00:13:12.904 }' 00:13:12.904 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.904 02:37:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.164 02:37:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:13.424 [2024-07-25 02:38:00.070634] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.424 [2024-07-25 02:38:00.070653] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x113662a34500 name Existed_Raid, state configuring 00:13:13.424 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:13.424 [2024-07-25 02:38:00.254666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.424 [2024-07-25 02:38:00.254697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.424 [2024-07-25 02:38:00.254716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.424 [2024-07-25 02:38:00.254721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.424 [2024-07-25 02:38:00.254724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.424 [2024-07-25 02:38:00.254729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.424 [2024-07-25 02:38:00.254731] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.424 
[2024-07-25 02:38:00.254736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.424 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.683 [2024-07-25 02:38:00.443439] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.683 BaseBdev1 00:13:13.683 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:13.683 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:13.683 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:13.683 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:13.683 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:13.684 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:13.684 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:13.943 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.203 [ 00:13:14.203 { 00:13:14.203 "name": "BaseBdev1", 00:13:14.203 "aliases": [ 00:13:14.203 "e8adc09c-4a2e-11ef-9c8e-7947904e2597" 00:13:14.203 ], 00:13:14.203 "product_name": "Malloc disk", 00:13:14.203 "block_size": 512, 00:13:14.203 "num_blocks": 65536, 00:13:14.203 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:14.203 "assigned_rate_limits": { 00:13:14.203 "rw_ios_per_sec": 0, 00:13:14.203 "rw_mbytes_per_sec": 0, 00:13:14.203 "r_mbytes_per_sec": 0, 00:13:14.203 "w_mbytes_per_sec": 0 00:13:14.203 }, 00:13:14.203 "claimed": true, 00:13:14.203 "claim_type": "exclusive_write", 00:13:14.203 "zoned": false, 00:13:14.203 "supported_io_types": { 00:13:14.203 "read": true, 00:13:14.203 "write": true, 00:13:14.203 "unmap": true, 00:13:14.203 "flush": true, 00:13:14.203 "reset": true, 00:13:14.203 "nvme_admin": false, 00:13:14.203 "nvme_io": false, 00:13:14.203 "nvme_io_md": false, 00:13:14.203 "write_zeroes": true, 00:13:14.203 "zcopy": true, 00:13:14.203 "get_zone_info": false, 00:13:14.203 "zone_management": false, 00:13:14.203 "zone_append": false, 00:13:14.203 "compare": false, 00:13:14.203 "compare_and_write": false, 00:13:14.203 "abort": true, 00:13:14.203 "seek_hole": false, 00:13:14.203 "seek_data": false, 00:13:14.203 "copy": true, 00:13:14.203 "nvme_iov_md": false 00:13:14.203 }, 00:13:14.203 "memory_domains": [ 00:13:14.203 { 00:13:14.203 "dma_device_id": "system", 00:13:14.203 "dma_device_type": 1 00:13:14.203 }, 00:13:14.203 { 00:13:14.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.203 "dma_device_type": 2 00:13:14.203 } 00:13:14.203 ], 00:13:14.203 "driver_specific": {} 00:13:14.203 } 00:13:14.203 ] 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
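The verify_raid_bdev_state helper being traced here reduces to one RPC call plus a jq filter over its output; a sketch of the check, assuming the same socket path, with the expected values taken from the JSON dumps printed in this log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # dump every raid bdev and keep only the one under test
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # with only BaseBdev1 registered the array is still assembling:
    # "state": "configuring", "raid_level": "raid1", "num_base_bdevs": 4, "num_base_bdevs_discovered": 1
    echo "$info" | jq -e '.state == "configuring" and .num_base_bdevs_discovered == 1' > /dev/null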
00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.203 02:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.203 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.203 "name": "Existed_Raid", 00:13:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.203 "strip_size_kb": 0, 00:13:14.203 "state": "configuring", 00:13:14.203 "raid_level": "raid1", 00:13:14.203 "superblock": false, 00:13:14.203 "num_base_bdevs": 4, 00:13:14.203 "num_base_bdevs_discovered": 1, 00:13:14.203 "num_base_bdevs_operational": 4, 00:13:14.203 "base_bdevs_list": [ 00:13:14.203 { 00:13:14.203 "name": "BaseBdev1", 00:13:14.203 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:14.203 "is_configured": true, 00:13:14.203 "data_offset": 0, 00:13:14.203 "data_size": 65536 00:13:14.203 }, 00:13:14.203 { 00:13:14.203 "name": "BaseBdev2", 00:13:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.203 "is_configured": false, 00:13:14.203 "data_offset": 0, 00:13:14.203 "data_size": 0 00:13:14.203 }, 00:13:14.203 { 00:13:14.203 "name": "BaseBdev3", 00:13:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.203 "is_configured": false, 00:13:14.203 "data_offset": 0, 00:13:14.203 "data_size": 0 00:13:14.203 }, 00:13:14.203 { 00:13:14.203 "name": "BaseBdev4", 00:13:14.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.203 "is_configured": false, 00:13:14.203 "data_offset": 0, 00:13:14.203 "data_size": 0 00:13:14.203 } 00:13:14.203 ] 00:13:14.203 }' 00:13:14.203 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.203 02:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.462 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:14.721 [2024-07-25 02:38:01.502807] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.721 [2024-07-25 02:38:01.502827] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x113662a34500 name Existed_Raid, state configuring 00:13:14.721 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:14.981 
[2024-07-25 02:38:01.682836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.981 [2024-07-25 02:38:01.683450] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.981 [2024-07-25 02:38:01.683484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.981 [2024-07-25 02:38:01.683488] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.981 [2024-07-25 02:38:01.683494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.981 [2024-07-25 02:38:01.683497] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:14.981 [2024-07-25 02:38:01.683502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.981 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.240 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.240 "name": "Existed_Raid", 00:13:15.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.240 "strip_size_kb": 0, 00:13:15.240 "state": "configuring", 00:13:15.240 "raid_level": "raid1", 00:13:15.240 "superblock": false, 00:13:15.240 "num_base_bdevs": 4, 00:13:15.240 "num_base_bdevs_discovered": 1, 00:13:15.240 "num_base_bdevs_operational": 4, 00:13:15.240 "base_bdevs_list": [ 00:13:15.240 { 00:13:15.240 "name": "BaseBdev1", 00:13:15.240 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:15.240 "is_configured": true, 00:13:15.240 "data_offset": 0, 00:13:15.240 "data_size": 65536 00:13:15.240 }, 00:13:15.240 { 00:13:15.240 "name": "BaseBdev2", 00:13:15.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.240 "is_configured": false, 00:13:15.240 "data_offset": 0, 00:13:15.240 "data_size": 0 00:13:15.240 }, 00:13:15.240 { 
00:13:15.240 "name": "BaseBdev3", 00:13:15.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.240 "is_configured": false, 00:13:15.240 "data_offset": 0, 00:13:15.240 "data_size": 0 00:13:15.240 }, 00:13:15.240 { 00:13:15.240 "name": "BaseBdev4", 00:13:15.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.240 "is_configured": false, 00:13:15.240 "data_offset": 0, 00:13:15.240 "data_size": 0 00:13:15.240 } 00:13:15.240 ] 00:13:15.240 }' 00:13:15.240 02:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.240 02:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.499 [2024-07-25 02:38:02.327000] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.499 BaseBdev2 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:15.499 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.759 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:16.018 [ 00:13:16.018 { 00:13:16.018 "name": "BaseBdev2", 00:13:16.018 "aliases": [ 00:13:16.018 "e9cd42d3-4a2e-11ef-9c8e-7947904e2597" 00:13:16.018 ], 00:13:16.018 "product_name": "Malloc disk", 00:13:16.018 "block_size": 512, 00:13:16.018 "num_blocks": 65536, 00:13:16.018 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:16.018 "assigned_rate_limits": { 00:13:16.018 "rw_ios_per_sec": 0, 00:13:16.018 "rw_mbytes_per_sec": 0, 00:13:16.018 "r_mbytes_per_sec": 0, 00:13:16.018 "w_mbytes_per_sec": 0 00:13:16.018 }, 00:13:16.018 "claimed": true, 00:13:16.018 "claim_type": "exclusive_write", 00:13:16.018 "zoned": false, 00:13:16.018 "supported_io_types": { 00:13:16.018 "read": true, 00:13:16.018 "write": true, 00:13:16.018 "unmap": true, 00:13:16.018 "flush": true, 00:13:16.018 "reset": true, 00:13:16.018 "nvme_admin": false, 00:13:16.018 "nvme_io": false, 00:13:16.018 "nvme_io_md": false, 00:13:16.018 "write_zeroes": true, 00:13:16.018 "zcopy": true, 00:13:16.018 "get_zone_info": false, 00:13:16.018 "zone_management": false, 00:13:16.018 "zone_append": false, 00:13:16.018 "compare": false, 00:13:16.018 "compare_and_write": false, 00:13:16.018 "abort": true, 00:13:16.018 "seek_hole": false, 00:13:16.018 "seek_data": false, 00:13:16.018 "copy": true, 00:13:16.018 "nvme_iov_md": false 00:13:16.018 }, 00:13:16.018 "memory_domains": [ 00:13:16.018 { 00:13:16.018 "dma_device_id": "system", 00:13:16.018 "dma_device_type": 1 00:13:16.018 }, 00:13:16.018 { 00:13:16.018 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.018 "dma_device_type": 2 00:13:16.018 } 00:13:16.018 ], 00:13:16.018 "driver_specific": {} 00:13:16.018 } 00:13:16.018 ] 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.018 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.278 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.278 "name": "Existed_Raid", 00:13:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.278 "strip_size_kb": 0, 00:13:16.278 "state": "configuring", 00:13:16.278 "raid_level": "raid1", 00:13:16.278 "superblock": false, 00:13:16.278 "num_base_bdevs": 4, 00:13:16.278 "num_base_bdevs_discovered": 2, 00:13:16.278 "num_base_bdevs_operational": 4, 00:13:16.278 "base_bdevs_list": [ 00:13:16.278 { 00:13:16.278 "name": "BaseBdev1", 00:13:16.278 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:16.278 "is_configured": true, 00:13:16.278 "data_offset": 0, 00:13:16.278 "data_size": 65536 00:13:16.278 }, 00:13:16.278 { 00:13:16.278 "name": "BaseBdev2", 00:13:16.278 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:16.278 "is_configured": true, 00:13:16.278 "data_offset": 0, 00:13:16.278 "data_size": 65536 00:13:16.278 }, 00:13:16.278 { 00:13:16.278 "name": "BaseBdev3", 00:13:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.278 "is_configured": false, 00:13:16.278 "data_offset": 0, 00:13:16.278 "data_size": 0 00:13:16.278 }, 00:13:16.278 { 00:13:16.278 "name": "BaseBdev4", 00:13:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.278 "is_configured": false, 00:13:16.278 "data_offset": 0, 00:13:16.278 "data_size": 0 00:13:16.278 } 00:13:16.278 ] 00:13:16.278 }' 00:13:16.278 02:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.278 02:38:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.538 [2024-07-25 02:38:03.367070] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.538 BaseBdev3 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:16.538 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.797 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:17.055 [ 00:13:17.055 { 00:13:17.055 "name": "BaseBdev3", 00:13:17.055 "aliases": [ 00:13:17.055 "ea6bf79e-4a2e-11ef-9c8e-7947904e2597" 00:13:17.055 ], 00:13:17.055 "product_name": "Malloc disk", 00:13:17.055 "block_size": 512, 00:13:17.055 "num_blocks": 65536, 00:13:17.055 "uuid": "ea6bf79e-4a2e-11ef-9c8e-7947904e2597", 00:13:17.055 "assigned_rate_limits": { 00:13:17.055 "rw_ios_per_sec": 0, 00:13:17.055 "rw_mbytes_per_sec": 0, 00:13:17.055 "r_mbytes_per_sec": 0, 00:13:17.055 "w_mbytes_per_sec": 0 00:13:17.055 }, 00:13:17.055 "claimed": true, 00:13:17.055 "claim_type": "exclusive_write", 00:13:17.055 "zoned": false, 00:13:17.055 "supported_io_types": { 00:13:17.055 "read": true, 00:13:17.055 "write": true, 00:13:17.055 "unmap": true, 00:13:17.055 "flush": true, 00:13:17.055 "reset": true, 00:13:17.056 "nvme_admin": false, 00:13:17.056 "nvme_io": false, 00:13:17.056 "nvme_io_md": false, 00:13:17.056 "write_zeroes": true, 00:13:17.056 "zcopy": true, 00:13:17.056 "get_zone_info": false, 00:13:17.056 "zone_management": false, 00:13:17.056 "zone_append": false, 00:13:17.056 "compare": false, 00:13:17.056 "compare_and_write": false, 00:13:17.056 "abort": true, 00:13:17.056 "seek_hole": false, 00:13:17.056 "seek_data": false, 00:13:17.056 "copy": true, 00:13:17.056 "nvme_iov_md": false 00:13:17.056 }, 00:13:17.056 "memory_domains": [ 00:13:17.056 { 00:13:17.056 "dma_device_id": "system", 00:13:17.056 "dma_device_type": 1 00:13:17.056 }, 00:13:17.056 { 00:13:17.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.056 "dma_device_type": 2 00:13:17.056 } 00:13:17.056 ], 00:13:17.056 "driver_specific": {} 00:13:17.056 } 00:13:17.056 ] 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.056 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.315 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.315 "name": "Existed_Raid", 00:13:17.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.315 "strip_size_kb": 0, 00:13:17.315 "state": "configuring", 00:13:17.315 "raid_level": "raid1", 00:13:17.315 "superblock": false, 00:13:17.315 "num_base_bdevs": 4, 00:13:17.315 "num_base_bdevs_discovered": 3, 00:13:17.315 "num_base_bdevs_operational": 4, 00:13:17.315 "base_bdevs_list": [ 00:13:17.315 { 00:13:17.315 "name": "BaseBdev1", 00:13:17.315 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:17.315 "is_configured": true, 00:13:17.315 "data_offset": 0, 00:13:17.315 "data_size": 65536 00:13:17.315 }, 00:13:17.315 { 00:13:17.315 "name": "BaseBdev2", 00:13:17.315 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:17.315 "is_configured": true, 00:13:17.315 "data_offset": 0, 00:13:17.315 "data_size": 65536 00:13:17.315 }, 00:13:17.315 { 00:13:17.315 "name": "BaseBdev3", 00:13:17.315 "uuid": "ea6bf79e-4a2e-11ef-9c8e-7947904e2597", 00:13:17.315 "is_configured": true, 00:13:17.315 "data_offset": 0, 00:13:17.315 "data_size": 65536 00:13:17.315 }, 00:13:17.315 { 00:13:17.315 "name": "BaseBdev4", 00:13:17.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.315 "is_configured": false, 00:13:17.315 "data_offset": 0, 00:13:17.315 "data_size": 0 00:13:17.315 } 00:13:17.315 ] 00:13:17.315 }' 00:13:17.315 02:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.315 02:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:17.575 [2024-07-25 02:38:04.407157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.575 [2024-07-25 02:38:04.407177] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x113662a34a00 00:13:17.575 [2024-07-25 02:38:04.407180] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:17.575 
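This is the step where the test creates the fourth and last base bdev; the DEBUG lines that follow show the raid bdev finishing configuration and switching from configuring to online once BaseBdev4 is claimed. A condensed sketch of the flow being exercised here, assuming BaseBdev1 already exists (same RPCs and jq filter as in the trace; the loop wrapper is only illustrative):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # the raid1 volume is created first and sits in "configuring" while base bdevs are missing
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b "$b"
      # reports "configuring" until all four base bdevs are claimed, then "online"
      $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  done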
[2024-07-25 02:38:04.407219] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x113662a97e20 00:13:17.575 [2024-07-25 02:38:04.407292] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x113662a34a00 00:13:17.575 [2024-07-25 02:38:04.407295] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x113662a34a00 00:13:17.575 [2024-07-25 02:38:04.407318] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.575 BaseBdev4 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:17.575 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:17.833 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:18.092 [ 00:13:18.092 { 00:13:18.092 "name": "BaseBdev4", 00:13:18.092 "aliases": [ 00:13:18.092 "eb0aabf8-4a2e-11ef-9c8e-7947904e2597" 00:13:18.092 ], 00:13:18.092 "product_name": "Malloc disk", 00:13:18.092 "block_size": 512, 00:13:18.092 "num_blocks": 65536, 00:13:18.092 "uuid": "eb0aabf8-4a2e-11ef-9c8e-7947904e2597", 00:13:18.092 "assigned_rate_limits": { 00:13:18.092 "rw_ios_per_sec": 0, 00:13:18.092 "rw_mbytes_per_sec": 0, 00:13:18.092 "r_mbytes_per_sec": 0, 00:13:18.092 "w_mbytes_per_sec": 0 00:13:18.092 }, 00:13:18.092 "claimed": true, 00:13:18.092 "claim_type": "exclusive_write", 00:13:18.092 "zoned": false, 00:13:18.092 "supported_io_types": { 00:13:18.092 "read": true, 00:13:18.092 "write": true, 00:13:18.092 "unmap": true, 00:13:18.092 "flush": true, 00:13:18.092 "reset": true, 00:13:18.092 "nvme_admin": false, 00:13:18.092 "nvme_io": false, 00:13:18.092 "nvme_io_md": false, 00:13:18.092 "write_zeroes": true, 00:13:18.092 "zcopy": true, 00:13:18.092 "get_zone_info": false, 00:13:18.092 "zone_management": false, 00:13:18.092 "zone_append": false, 00:13:18.092 "compare": false, 00:13:18.092 "compare_and_write": false, 00:13:18.092 "abort": true, 00:13:18.092 "seek_hole": false, 00:13:18.092 "seek_data": false, 00:13:18.092 "copy": true, 00:13:18.092 "nvme_iov_md": false 00:13:18.092 }, 00:13:18.092 "memory_domains": [ 00:13:18.092 { 00:13:18.093 "dma_device_id": "system", 00:13:18.093 "dma_device_type": 1 00:13:18.093 }, 00:13:18.093 { 00:13:18.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.093 "dma_device_type": 2 00:13:18.093 } 00:13:18.093 ], 00:13:18.093 "driver_specific": {} 00:13:18.093 } 00:13:18.093 ] 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.093 "name": "Existed_Raid", 00:13:18.093 "uuid": "eb0aafcd-4a2e-11ef-9c8e-7947904e2597", 00:13:18.093 "strip_size_kb": 0, 00:13:18.093 "state": "online", 00:13:18.093 "raid_level": "raid1", 00:13:18.093 "superblock": false, 00:13:18.093 "num_base_bdevs": 4, 00:13:18.093 "num_base_bdevs_discovered": 4, 00:13:18.093 "num_base_bdevs_operational": 4, 00:13:18.093 "base_bdevs_list": [ 00:13:18.093 { 00:13:18.093 "name": "BaseBdev1", 00:13:18.093 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:18.093 "is_configured": true, 00:13:18.093 "data_offset": 0, 00:13:18.093 "data_size": 65536 00:13:18.093 }, 00:13:18.093 { 00:13:18.093 "name": "BaseBdev2", 00:13:18.093 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:18.093 "is_configured": true, 00:13:18.093 "data_offset": 0, 00:13:18.093 "data_size": 65536 00:13:18.093 }, 00:13:18.093 { 00:13:18.093 "name": "BaseBdev3", 00:13:18.093 "uuid": "ea6bf79e-4a2e-11ef-9c8e-7947904e2597", 00:13:18.093 "is_configured": true, 00:13:18.093 "data_offset": 0, 00:13:18.093 "data_size": 65536 00:13:18.093 }, 00:13:18.093 { 00:13:18.093 "name": "BaseBdev4", 00:13:18.093 "uuid": "eb0aabf8-4a2e-11ef-9c8e-7947904e2597", 00:13:18.093 "is_configured": true, 00:13:18.093 "data_offset": 0, 00:13:18.093 "data_size": 65536 00:13:18.093 } 00:13:18.093 ] 00:13:18.093 }' 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.093 02:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:18.352 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:18.612 [2024-07-25 02:38:05.399223] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.612 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:18.612 "name": "Existed_Raid", 00:13:18.612 "aliases": [ 00:13:18.612 "eb0aafcd-4a2e-11ef-9c8e-7947904e2597" 00:13:18.612 ], 00:13:18.612 "product_name": "Raid Volume", 00:13:18.612 "block_size": 512, 00:13:18.612 "num_blocks": 65536, 00:13:18.612 "uuid": "eb0aafcd-4a2e-11ef-9c8e-7947904e2597", 00:13:18.612 "assigned_rate_limits": { 00:13:18.612 "rw_ios_per_sec": 0, 00:13:18.612 "rw_mbytes_per_sec": 0, 00:13:18.612 "r_mbytes_per_sec": 0, 00:13:18.612 "w_mbytes_per_sec": 0 00:13:18.612 }, 00:13:18.612 "claimed": false, 00:13:18.612 "zoned": false, 00:13:18.612 "supported_io_types": { 00:13:18.612 "read": true, 00:13:18.612 "write": true, 00:13:18.612 "unmap": false, 00:13:18.612 "flush": false, 00:13:18.612 "reset": true, 00:13:18.612 "nvme_admin": false, 00:13:18.612 "nvme_io": false, 00:13:18.612 "nvme_io_md": false, 00:13:18.612 "write_zeroes": true, 00:13:18.612 "zcopy": false, 00:13:18.612 "get_zone_info": false, 00:13:18.612 "zone_management": false, 00:13:18.612 "zone_append": false, 00:13:18.612 "compare": false, 00:13:18.612 "compare_and_write": false, 00:13:18.612 "abort": false, 00:13:18.612 "seek_hole": false, 00:13:18.612 "seek_data": false, 00:13:18.612 "copy": false, 00:13:18.612 "nvme_iov_md": false 00:13:18.612 }, 00:13:18.612 "memory_domains": [ 00:13:18.612 { 00:13:18.612 "dma_device_id": "system", 00:13:18.612 "dma_device_type": 1 00:13:18.612 }, 00:13:18.612 { 00:13:18.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.612 "dma_device_type": 2 00:13:18.612 }, 00:13:18.612 { 00:13:18.612 "dma_device_id": "system", 00:13:18.612 "dma_device_type": 1 00:13:18.612 }, 00:13:18.612 { 00:13:18.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.612 "dma_device_type": 2 00:13:18.612 }, 00:13:18.612 { 00:13:18.613 "dma_device_id": "system", 00:13:18.613 "dma_device_type": 1 00:13:18.613 }, 00:13:18.613 { 00:13:18.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.613 "dma_device_type": 2 00:13:18.613 }, 00:13:18.613 { 00:13:18.613 "dma_device_id": "system", 00:13:18.613 "dma_device_type": 1 00:13:18.613 }, 00:13:18.613 { 00:13:18.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.613 "dma_device_type": 2 00:13:18.613 } 00:13:18.613 ], 00:13:18.613 "driver_specific": { 00:13:18.613 "raid": { 00:13:18.613 "uuid": "eb0aafcd-4a2e-11ef-9c8e-7947904e2597", 00:13:18.613 "strip_size_kb": 0, 00:13:18.613 "state": "online", 00:13:18.613 "raid_level": "raid1", 00:13:18.613 "superblock": false, 00:13:18.613 "num_base_bdevs": 4, 00:13:18.613 "num_base_bdevs_discovered": 4, 00:13:18.613 "num_base_bdevs_operational": 4, 00:13:18.613 "base_bdevs_list": [ 00:13:18.613 { 00:13:18.613 "name": "BaseBdev1", 00:13:18.613 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:18.613 "is_configured": true, 00:13:18.613 "data_offset": 0, 00:13:18.613 
"data_size": 65536 00:13:18.613 }, 00:13:18.613 { 00:13:18.613 "name": "BaseBdev2", 00:13:18.613 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:18.613 "is_configured": true, 00:13:18.613 "data_offset": 0, 00:13:18.613 "data_size": 65536 00:13:18.613 }, 00:13:18.613 { 00:13:18.613 "name": "BaseBdev3", 00:13:18.613 "uuid": "ea6bf79e-4a2e-11ef-9c8e-7947904e2597", 00:13:18.613 "is_configured": true, 00:13:18.613 "data_offset": 0, 00:13:18.613 "data_size": 65536 00:13:18.613 }, 00:13:18.613 { 00:13:18.613 "name": "BaseBdev4", 00:13:18.613 "uuid": "eb0aabf8-4a2e-11ef-9c8e-7947904e2597", 00:13:18.613 "is_configured": true, 00:13:18.613 "data_offset": 0, 00:13:18.613 "data_size": 65536 00:13:18.613 } 00:13:18.613 ] 00:13:18.613 } 00:13:18.613 } 00:13:18.613 }' 00:13:18.613 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.613 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:18.613 BaseBdev2 00:13:18.613 BaseBdev3 00:13:18.613 BaseBdev4' 00:13:18.613 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:18.613 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:18.613 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:18.873 "name": "BaseBdev1", 00:13:18.873 "aliases": [ 00:13:18.873 "e8adc09c-4a2e-11ef-9c8e-7947904e2597" 00:13:18.873 ], 00:13:18.873 "product_name": "Malloc disk", 00:13:18.873 "block_size": 512, 00:13:18.873 "num_blocks": 65536, 00:13:18.873 "uuid": "e8adc09c-4a2e-11ef-9c8e-7947904e2597", 00:13:18.873 "assigned_rate_limits": { 00:13:18.873 "rw_ios_per_sec": 0, 00:13:18.873 "rw_mbytes_per_sec": 0, 00:13:18.873 "r_mbytes_per_sec": 0, 00:13:18.873 "w_mbytes_per_sec": 0 00:13:18.873 }, 00:13:18.873 "claimed": true, 00:13:18.873 "claim_type": "exclusive_write", 00:13:18.873 "zoned": false, 00:13:18.873 "supported_io_types": { 00:13:18.873 "read": true, 00:13:18.873 "write": true, 00:13:18.873 "unmap": true, 00:13:18.873 "flush": true, 00:13:18.873 "reset": true, 00:13:18.873 "nvme_admin": false, 00:13:18.873 "nvme_io": false, 00:13:18.873 "nvme_io_md": false, 00:13:18.873 "write_zeroes": true, 00:13:18.873 "zcopy": true, 00:13:18.873 "get_zone_info": false, 00:13:18.873 "zone_management": false, 00:13:18.873 "zone_append": false, 00:13:18.873 "compare": false, 00:13:18.873 "compare_and_write": false, 00:13:18.873 "abort": true, 00:13:18.873 "seek_hole": false, 00:13:18.873 "seek_data": false, 00:13:18.873 "copy": true, 00:13:18.873 "nvme_iov_md": false 00:13:18.873 }, 00:13:18.873 "memory_domains": [ 00:13:18.873 { 00:13:18.873 "dma_device_id": "system", 00:13:18.873 "dma_device_type": 1 00:13:18.873 }, 00:13:18.873 { 00:13:18.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.873 "dma_device_type": 2 00:13:18.873 } 00:13:18.873 ], 00:13:18.873 "driver_specific": {} 00:13:18.873 }' 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:18.873 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.133 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.133 "name": "BaseBdev2", 00:13:19.133 "aliases": [ 00:13:19.133 "e9cd42d3-4a2e-11ef-9c8e-7947904e2597" 00:13:19.133 ], 00:13:19.133 "product_name": "Malloc disk", 00:13:19.133 "block_size": 512, 00:13:19.133 "num_blocks": 65536, 00:13:19.133 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:19.133 "assigned_rate_limits": { 00:13:19.133 "rw_ios_per_sec": 0, 00:13:19.133 "rw_mbytes_per_sec": 0, 00:13:19.133 "r_mbytes_per_sec": 0, 00:13:19.133 "w_mbytes_per_sec": 0 00:13:19.133 }, 00:13:19.133 "claimed": true, 00:13:19.133 "claim_type": "exclusive_write", 00:13:19.133 "zoned": false, 00:13:19.133 "supported_io_types": { 00:13:19.134 "read": true, 00:13:19.134 "write": true, 00:13:19.134 "unmap": true, 00:13:19.134 "flush": true, 00:13:19.134 "reset": true, 00:13:19.134 "nvme_admin": false, 00:13:19.134 "nvme_io": false, 00:13:19.134 "nvme_io_md": false, 00:13:19.134 "write_zeroes": true, 00:13:19.134 "zcopy": true, 00:13:19.134 "get_zone_info": false, 00:13:19.134 "zone_management": false, 00:13:19.134 "zone_append": false, 00:13:19.134 "compare": false, 00:13:19.134 "compare_and_write": false, 00:13:19.134 "abort": true, 00:13:19.134 "seek_hole": false, 00:13:19.134 "seek_data": false, 00:13:19.134 "copy": true, 00:13:19.134 "nvme_iov_md": false 00:13:19.134 }, 00:13:19.134 "memory_domains": [ 00:13:19.134 { 00:13:19.134 "dma_device_id": "system", 00:13:19.134 "dma_device_type": 1 00:13:19.134 }, 00:13:19.134 { 00:13:19.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.134 "dma_device_type": 2 00:13:19.134 } 00:13:19.134 ], 00:13:19.134 "driver_specific": {} 00:13:19.134 }' 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:19.134 02:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.394 "name": "BaseBdev3", 00:13:19.394 "aliases": [ 00:13:19.394 "ea6bf79e-4a2e-11ef-9c8e-7947904e2597" 00:13:19.394 ], 00:13:19.394 "product_name": "Malloc disk", 00:13:19.394 "block_size": 512, 00:13:19.394 "num_blocks": 65536, 00:13:19.394 "uuid": "ea6bf79e-4a2e-11ef-9c8e-7947904e2597", 00:13:19.394 "assigned_rate_limits": { 00:13:19.394 "rw_ios_per_sec": 0, 00:13:19.394 "rw_mbytes_per_sec": 0, 00:13:19.394 "r_mbytes_per_sec": 0, 00:13:19.394 "w_mbytes_per_sec": 0 00:13:19.394 }, 00:13:19.394 "claimed": true, 00:13:19.394 "claim_type": "exclusive_write", 00:13:19.394 "zoned": false, 00:13:19.394 "supported_io_types": { 00:13:19.394 "read": true, 00:13:19.394 "write": true, 00:13:19.394 "unmap": true, 00:13:19.394 "flush": true, 00:13:19.394 "reset": true, 00:13:19.394 "nvme_admin": false, 00:13:19.394 "nvme_io": false, 00:13:19.394 "nvme_io_md": false, 00:13:19.394 "write_zeroes": true, 00:13:19.394 "zcopy": true, 00:13:19.394 "get_zone_info": false, 00:13:19.394 "zone_management": false, 00:13:19.394 "zone_append": false, 00:13:19.394 "compare": false, 00:13:19.394 "compare_and_write": false, 00:13:19.394 "abort": true, 00:13:19.394 "seek_hole": false, 00:13:19.394 "seek_data": false, 00:13:19.394 "copy": true, 00:13:19.394 "nvme_iov_md": false 00:13:19.394 }, 00:13:19.394 "memory_domains": [ 00:13:19.394 { 00:13:19.394 "dma_device_id": "system", 00:13:19.394 "dma_device_type": 1 00:13:19.394 }, 00:13:19.394 { 00:13:19.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.394 "dma_device_type": 2 00:13:19.394 } 00:13:19.394 ], 00:13:19.394 "driver_specific": {} 00:13:19.394 }' 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.394 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:19.395 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:19.655 "name": "BaseBdev4", 00:13:19.655 "aliases": [ 00:13:19.655 "eb0aabf8-4a2e-11ef-9c8e-7947904e2597" 00:13:19.655 ], 00:13:19.655 "product_name": "Malloc disk", 00:13:19.655 "block_size": 512, 00:13:19.655 "num_blocks": 65536, 00:13:19.655 "uuid": "eb0aabf8-4a2e-11ef-9c8e-7947904e2597", 00:13:19.655 "assigned_rate_limits": { 00:13:19.655 "rw_ios_per_sec": 0, 00:13:19.655 "rw_mbytes_per_sec": 0, 00:13:19.655 "r_mbytes_per_sec": 0, 00:13:19.655 "w_mbytes_per_sec": 0 00:13:19.655 }, 00:13:19.655 "claimed": true, 00:13:19.655 "claim_type": "exclusive_write", 00:13:19.655 "zoned": false, 00:13:19.655 "supported_io_types": { 00:13:19.655 "read": true, 00:13:19.655 "write": true, 00:13:19.655 "unmap": true, 00:13:19.655 "flush": true, 00:13:19.655 "reset": true, 00:13:19.655 "nvme_admin": false, 00:13:19.655 "nvme_io": false, 00:13:19.655 "nvme_io_md": false, 00:13:19.655 "write_zeroes": true, 00:13:19.655 "zcopy": true, 00:13:19.655 "get_zone_info": false, 00:13:19.655 "zone_management": false, 00:13:19.655 "zone_append": false, 00:13:19.655 "compare": false, 00:13:19.655 "compare_and_write": false, 00:13:19.655 "abort": true, 00:13:19.655 "seek_hole": false, 00:13:19.655 "seek_data": false, 00:13:19.655 "copy": true, 00:13:19.655 "nvme_iov_md": false 00:13:19.655 }, 00:13:19.655 "memory_domains": [ 00:13:19.655 { 00:13:19.655 "dma_device_id": "system", 00:13:19.655 "dma_device_type": 1 00:13:19.655 }, 00:13:19.655 { 00:13:19.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.655 "dma_device_type": 2 00:13:19.655 } 00:13:19.655 ], 00:13:19.655 "driver_specific": {} 00:13:19.655 }' 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:19.655 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:19.656 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:19.916 [2024-07-25 02:38:06.723305] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.916 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.176 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.176 "name": "Existed_Raid", 00:13:20.176 "uuid": "eb0aafcd-4a2e-11ef-9c8e-7947904e2597", 00:13:20.176 "strip_size_kb": 0, 00:13:20.176 "state": "online", 00:13:20.176 "raid_level": "raid1", 00:13:20.176 "superblock": false, 00:13:20.176 "num_base_bdevs": 4, 00:13:20.176 "num_base_bdevs_discovered": 3, 00:13:20.176 "num_base_bdevs_operational": 3, 00:13:20.176 "base_bdevs_list": [ 00:13:20.176 { 00:13:20.176 "name": null, 00:13:20.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.176 "is_configured": false, 00:13:20.176 "data_offset": 0, 00:13:20.176 "data_size": 65536 00:13:20.176 }, 00:13:20.176 { 00:13:20.176 "name": "BaseBdev2", 00:13:20.176 "uuid": "e9cd42d3-4a2e-11ef-9c8e-7947904e2597", 00:13:20.176 "is_configured": true, 00:13:20.176 "data_offset": 0, 00:13:20.176 "data_size": 65536 
00:13:20.176 }, 00:13:20.176 { 00:13:20.176 "name": "BaseBdev3", 00:13:20.176 "uuid": "ea6bf79e-4a2e-11ef-9c8e-7947904e2597", 00:13:20.176 "is_configured": true, 00:13:20.176 "data_offset": 0, 00:13:20.176 "data_size": 65536 00:13:20.176 }, 00:13:20.176 { 00:13:20.176 "name": "BaseBdev4", 00:13:20.176 "uuid": "eb0aabf8-4a2e-11ef-9c8e-7947904e2597", 00:13:20.176 "is_configured": true, 00:13:20.176 "data_offset": 0, 00:13:20.176 "data_size": 65536 00:13:20.176 } 00:13:20.176 ] 00:13:20.176 }' 00:13:20.176 02:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.176 02:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.436 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:20.436 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:20.436 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.436 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:20.695 [2024-07-25 02:38:07.560502] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.695 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:20.955 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:20.955 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:20.955 02:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:21.215 [2024-07-25 02:38:07.965664] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:21.215 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:21.215 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:21.215 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.215 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:21.476 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:21.476 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.476 02:38:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:21.476 [2024-07-25 02:38:08.346849] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:21.476 [2024-07-25 02:38:08.346879] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.476 [2024-07-25 02:38:08.356120] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.476 [2024-07-25 02:38:08.356137] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.476 [2024-07-25 02:38:08.356140] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x113662a34a00 name Existed_Raid, state offline 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:21.736 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:21.996 BaseBdev2 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:21.996 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.256 02:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.256 [ 00:13:22.256 { 00:13:22.256 "name": "BaseBdev2", 00:13:22.256 "aliases": [ 00:13:22.256 "ed9cc99c-4a2e-11ef-9c8e-7947904e2597" 00:13:22.256 ], 00:13:22.256 "product_name": "Malloc disk", 00:13:22.256 "block_size": 512, 00:13:22.256 "num_blocks": 65536, 00:13:22.256 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:22.256 "assigned_rate_limits": { 00:13:22.256 "rw_ios_per_sec": 0, 00:13:22.256 "rw_mbytes_per_sec": 0, 00:13:22.256 
"r_mbytes_per_sec": 0, 00:13:22.256 "w_mbytes_per_sec": 0 00:13:22.256 }, 00:13:22.256 "claimed": false, 00:13:22.256 "zoned": false, 00:13:22.256 "supported_io_types": { 00:13:22.256 "read": true, 00:13:22.256 "write": true, 00:13:22.256 "unmap": true, 00:13:22.256 "flush": true, 00:13:22.256 "reset": true, 00:13:22.256 "nvme_admin": false, 00:13:22.256 "nvme_io": false, 00:13:22.256 "nvme_io_md": false, 00:13:22.256 "write_zeroes": true, 00:13:22.256 "zcopy": true, 00:13:22.256 "get_zone_info": false, 00:13:22.256 "zone_management": false, 00:13:22.256 "zone_append": false, 00:13:22.256 "compare": false, 00:13:22.256 "compare_and_write": false, 00:13:22.256 "abort": true, 00:13:22.256 "seek_hole": false, 00:13:22.256 "seek_data": false, 00:13:22.256 "copy": true, 00:13:22.257 "nvme_iov_md": false 00:13:22.257 }, 00:13:22.257 "memory_domains": [ 00:13:22.257 { 00:13:22.257 "dma_device_id": "system", 00:13:22.257 "dma_device_type": 1 00:13:22.257 }, 00:13:22.257 { 00:13:22.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.257 "dma_device_type": 2 00:13:22.257 } 00:13:22.257 ], 00:13:22.257 "driver_specific": {} 00:13:22.257 } 00:13:22.257 ] 00:13:22.257 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:22.257 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:22.257 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:22.257 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:22.516 BaseBdev3 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:22.516 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.776 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:22.776 [ 00:13:22.776 { 00:13:22.776 "name": "BaseBdev3", 00:13:22.776 "aliases": [ 00:13:22.776 "edf1a1e7-4a2e-11ef-9c8e-7947904e2597" 00:13:22.776 ], 00:13:22.776 "product_name": "Malloc disk", 00:13:22.776 "block_size": 512, 00:13:22.776 "num_blocks": 65536, 00:13:22.776 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:22.776 "assigned_rate_limits": { 00:13:22.776 "rw_ios_per_sec": 0, 00:13:22.776 "rw_mbytes_per_sec": 0, 00:13:22.776 "r_mbytes_per_sec": 0, 00:13:22.776 "w_mbytes_per_sec": 0 00:13:22.776 }, 00:13:22.776 "claimed": false, 00:13:22.776 "zoned": false, 00:13:22.776 "supported_io_types": { 00:13:22.776 "read": true, 00:13:22.776 "write": true, 00:13:22.776 "unmap": true, 00:13:22.776 "flush": true, 00:13:22.776 "reset": true, 00:13:22.776 "nvme_admin": false, 
00:13:22.776 "nvme_io": false, 00:13:22.776 "nvme_io_md": false, 00:13:22.776 "write_zeroes": true, 00:13:22.776 "zcopy": true, 00:13:22.777 "get_zone_info": false, 00:13:22.777 "zone_management": false, 00:13:22.777 "zone_append": false, 00:13:22.777 "compare": false, 00:13:22.777 "compare_and_write": false, 00:13:22.777 "abort": true, 00:13:22.777 "seek_hole": false, 00:13:22.777 "seek_data": false, 00:13:22.777 "copy": true, 00:13:22.777 "nvme_iov_md": false 00:13:22.777 }, 00:13:22.777 "memory_domains": [ 00:13:22.777 { 00:13:22.777 "dma_device_id": "system", 00:13:22.777 "dma_device_type": 1 00:13:22.777 }, 00:13:22.777 { 00:13:22.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.777 "dma_device_type": 2 00:13:22.777 } 00:13:22.777 ], 00:13:22.777 "driver_specific": {} 00:13:22.777 } 00:13:22.777 ] 00:13:22.777 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:22.777 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:22.777 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:22.777 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:23.037 BaseBdev4 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:23.037 02:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:23.296 02:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:23.556 [ 00:13:23.556 { 00:13:23.556 "name": "BaseBdev4", 00:13:23.556 "aliases": [ 00:13:23.556 "ee44a596-4a2e-11ef-9c8e-7947904e2597" 00:13:23.556 ], 00:13:23.556 "product_name": "Malloc disk", 00:13:23.556 "block_size": 512, 00:13:23.556 "num_blocks": 65536, 00:13:23.556 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:23.556 "assigned_rate_limits": { 00:13:23.556 "rw_ios_per_sec": 0, 00:13:23.556 "rw_mbytes_per_sec": 0, 00:13:23.556 "r_mbytes_per_sec": 0, 00:13:23.556 "w_mbytes_per_sec": 0 00:13:23.556 }, 00:13:23.556 "claimed": false, 00:13:23.556 "zoned": false, 00:13:23.556 "supported_io_types": { 00:13:23.556 "read": true, 00:13:23.556 "write": true, 00:13:23.556 "unmap": true, 00:13:23.556 "flush": true, 00:13:23.556 "reset": true, 00:13:23.556 "nvme_admin": false, 00:13:23.557 "nvme_io": false, 00:13:23.557 "nvme_io_md": false, 00:13:23.557 "write_zeroes": true, 00:13:23.557 "zcopy": true, 00:13:23.557 "get_zone_info": false, 00:13:23.557 "zone_management": false, 00:13:23.557 "zone_append": false, 00:13:23.557 "compare": false, 00:13:23.557 "compare_and_write": false, 00:13:23.557 "abort": true, 
00:13:23.557 "seek_hole": false, 00:13:23.557 "seek_data": false, 00:13:23.557 "copy": true, 00:13:23.557 "nvme_iov_md": false 00:13:23.557 }, 00:13:23.557 "memory_domains": [ 00:13:23.557 { 00:13:23.557 "dma_device_id": "system", 00:13:23.557 "dma_device_type": 1 00:13:23.557 }, 00:13:23.557 { 00:13:23.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.557 "dma_device_type": 2 00:13:23.557 } 00:13:23.557 ], 00:13:23.557 "driver_specific": {} 00:13:23.557 } 00:13:23.557 ] 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:23.557 [2024-07-25 02:38:10.400259] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.557 [2024-07-25 02:38:10.400299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.557 [2024-07-25 02:38:10.400304] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.557 [2024-07-25 02:38:10.400561] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.557 [2024-07-25 02:38:10.400574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.557 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.817 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.817 "name": "Existed_Raid", 00:13:23.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.817 "strip_size_kb": 0, 00:13:23.817 "state": "configuring", 00:13:23.817 "raid_level": "raid1", 00:13:23.817 "superblock": false, 00:13:23.817 "num_base_bdevs": 4, 00:13:23.817 
"num_base_bdevs_discovered": 3, 00:13:23.817 "num_base_bdevs_operational": 4, 00:13:23.817 "base_bdevs_list": [ 00:13:23.817 { 00:13:23.817 "name": "BaseBdev1", 00:13:23.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.817 "is_configured": false, 00:13:23.817 "data_offset": 0, 00:13:23.817 "data_size": 0 00:13:23.817 }, 00:13:23.817 { 00:13:23.817 "name": "BaseBdev2", 00:13:23.817 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:23.817 "is_configured": true, 00:13:23.817 "data_offset": 0, 00:13:23.817 "data_size": 65536 00:13:23.817 }, 00:13:23.817 { 00:13:23.817 "name": "BaseBdev3", 00:13:23.817 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:23.817 "is_configured": true, 00:13:23.817 "data_offset": 0, 00:13:23.817 "data_size": 65536 00:13:23.817 }, 00:13:23.817 { 00:13:23.817 "name": "BaseBdev4", 00:13:23.817 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:23.817 "is_configured": true, 00:13:23.817 "data_offset": 0, 00:13:23.817 "data_size": 65536 00:13:23.817 } 00:13:23.817 ] 00:13:23.817 }' 00:13:23.817 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.817 02:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.077 02:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:24.338 [2024-07-25 02:38:11.052304] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.338 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.598 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.598 "name": "Existed_Raid", 00:13:24.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.598 "strip_size_kb": 0, 00:13:24.598 "state": "configuring", 00:13:24.598 "raid_level": "raid1", 00:13:24.598 "superblock": false, 00:13:24.598 "num_base_bdevs": 4, 00:13:24.598 "num_base_bdevs_discovered": 2, 00:13:24.598 "num_base_bdevs_operational": 4, 00:13:24.598 "base_bdevs_list": [ 00:13:24.598 { 00:13:24.598 "name": 
"BaseBdev1", 00:13:24.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.598 "is_configured": false, 00:13:24.598 "data_offset": 0, 00:13:24.598 "data_size": 0 00:13:24.598 }, 00:13:24.598 { 00:13:24.598 "name": null, 00:13:24.598 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:24.598 "is_configured": false, 00:13:24.598 "data_offset": 0, 00:13:24.598 "data_size": 65536 00:13:24.598 }, 00:13:24.598 { 00:13:24.598 "name": "BaseBdev3", 00:13:24.598 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:24.598 "is_configured": true, 00:13:24.598 "data_offset": 0, 00:13:24.598 "data_size": 65536 00:13:24.598 }, 00:13:24.598 { 00:13:24.598 "name": "BaseBdev4", 00:13:24.598 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:24.598 "is_configured": true, 00:13:24.598 "data_offset": 0, 00:13:24.598 "data_size": 65536 00:13:24.598 } 00:13:24.598 ] 00:13:24.598 }' 00:13:24.598 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.598 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.857 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.858 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.858 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:24.858 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.117 [2024-07-25 02:38:11.860488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.117 BaseBdev1 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:25.117 02:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.377 02:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.377 [ 00:13:25.377 { 00:13:25.377 "name": "BaseBdev1", 00:13:25.377 "aliases": [ 00:13:25.377 "ef7bf455-4a2e-11ef-9c8e-7947904e2597" 00:13:25.377 ], 00:13:25.378 "product_name": "Malloc disk", 00:13:25.378 "block_size": 512, 00:13:25.378 "num_blocks": 65536, 00:13:25.378 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:25.378 "assigned_rate_limits": { 00:13:25.378 "rw_ios_per_sec": 0, 00:13:25.378 "rw_mbytes_per_sec": 0, 00:13:25.378 "r_mbytes_per_sec": 0, 00:13:25.378 "w_mbytes_per_sec": 0 00:13:25.378 }, 00:13:25.378 "claimed": true, 00:13:25.378 "claim_type": "exclusive_write", 00:13:25.378 "zoned": false, 
00:13:25.378 "supported_io_types": { 00:13:25.378 "read": true, 00:13:25.378 "write": true, 00:13:25.378 "unmap": true, 00:13:25.378 "flush": true, 00:13:25.378 "reset": true, 00:13:25.378 "nvme_admin": false, 00:13:25.378 "nvme_io": false, 00:13:25.378 "nvme_io_md": false, 00:13:25.378 "write_zeroes": true, 00:13:25.378 "zcopy": true, 00:13:25.378 "get_zone_info": false, 00:13:25.378 "zone_management": false, 00:13:25.378 "zone_append": false, 00:13:25.378 "compare": false, 00:13:25.378 "compare_and_write": false, 00:13:25.378 "abort": true, 00:13:25.378 "seek_hole": false, 00:13:25.378 "seek_data": false, 00:13:25.378 "copy": true, 00:13:25.378 "nvme_iov_md": false 00:13:25.378 }, 00:13:25.378 "memory_domains": [ 00:13:25.378 { 00:13:25.378 "dma_device_id": "system", 00:13:25.378 "dma_device_type": 1 00:13:25.378 }, 00:13:25.378 { 00:13:25.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.378 "dma_device_type": 2 00:13:25.378 } 00:13:25.378 ], 00:13:25.378 "driver_specific": {} 00:13:25.378 } 00:13:25.378 ] 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.378 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.638 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.638 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.638 "name": "Existed_Raid", 00:13:25.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.638 "strip_size_kb": 0, 00:13:25.638 "state": "configuring", 00:13:25.638 "raid_level": "raid1", 00:13:25.638 "superblock": false, 00:13:25.638 "num_base_bdevs": 4, 00:13:25.638 "num_base_bdevs_discovered": 3, 00:13:25.638 "num_base_bdevs_operational": 4, 00:13:25.638 "base_bdevs_list": [ 00:13:25.638 { 00:13:25.638 "name": "BaseBdev1", 00:13:25.638 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:25.638 "is_configured": true, 00:13:25.638 "data_offset": 0, 00:13:25.638 "data_size": 65536 00:13:25.638 }, 00:13:25.638 { 00:13:25.638 "name": null, 00:13:25.638 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:25.638 "is_configured": false, 00:13:25.638 "data_offset": 0, 00:13:25.638 "data_size": 65536 00:13:25.638 }, 
00:13:25.638 { 00:13:25.638 "name": "BaseBdev3", 00:13:25.638 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:25.638 "is_configured": true, 00:13:25.638 "data_offset": 0, 00:13:25.638 "data_size": 65536 00:13:25.638 }, 00:13:25.638 { 00:13:25.638 "name": "BaseBdev4", 00:13:25.638 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:25.638 "is_configured": true, 00:13:25.638 "data_offset": 0, 00:13:25.638 "data_size": 65536 00:13:25.638 } 00:13:25.638 ] 00:13:25.638 }' 00:13:25.638 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.638 02:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.898 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.898 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.158 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:26.158 02:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:26.418 [2024-07-25 02:38:13.084486] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.418 "name": "Existed_Raid", 00:13:26.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.418 "strip_size_kb": 0, 00:13:26.418 "state": "configuring", 00:13:26.418 "raid_level": "raid1", 00:13:26.418 "superblock": false, 00:13:26.418 "num_base_bdevs": 4, 00:13:26.418 "num_base_bdevs_discovered": 2, 00:13:26.418 "num_base_bdevs_operational": 4, 00:13:26.418 "base_bdevs_list": [ 00:13:26.418 { 00:13:26.418 "name": "BaseBdev1", 00:13:26.418 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:26.418 "is_configured": true, 00:13:26.418 "data_offset": 
0, 00:13:26.418 "data_size": 65536 00:13:26.418 }, 00:13:26.418 { 00:13:26.418 "name": null, 00:13:26.418 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:26.418 "is_configured": false, 00:13:26.418 "data_offset": 0, 00:13:26.418 "data_size": 65536 00:13:26.418 }, 00:13:26.418 { 00:13:26.418 "name": null, 00:13:26.418 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:26.418 "is_configured": false, 00:13:26.418 "data_offset": 0, 00:13:26.418 "data_size": 65536 00:13:26.418 }, 00:13:26.418 { 00:13:26.418 "name": "BaseBdev4", 00:13:26.418 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:26.418 "is_configured": true, 00:13:26.418 "data_offset": 0, 00:13:26.418 "data_size": 65536 00:13:26.418 } 00:13:26.418 ] 00:13:26.418 }' 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.418 02:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.679 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.679 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:26.939 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:26.939 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:27.198 [2024-07-25 02:38:13.932565] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.198 02:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.458 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.458 "name": "Existed_Raid", 00:13:27.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.458 "strip_size_kb": 0, 00:13:27.458 "state": "configuring", 00:13:27.458 "raid_level": "raid1", 00:13:27.458 "superblock": false, 00:13:27.458 "num_base_bdevs": 4, 
00:13:27.458 "num_base_bdevs_discovered": 3, 00:13:27.458 "num_base_bdevs_operational": 4, 00:13:27.458 "base_bdevs_list": [ 00:13:27.458 { 00:13:27.458 "name": "BaseBdev1", 00:13:27.458 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:27.458 "is_configured": true, 00:13:27.458 "data_offset": 0, 00:13:27.458 "data_size": 65536 00:13:27.458 }, 00:13:27.458 { 00:13:27.458 "name": null, 00:13:27.458 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:27.458 "is_configured": false, 00:13:27.458 "data_offset": 0, 00:13:27.458 "data_size": 65536 00:13:27.458 }, 00:13:27.458 { 00:13:27.458 "name": "BaseBdev3", 00:13:27.458 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:27.458 "is_configured": true, 00:13:27.458 "data_offset": 0, 00:13:27.458 "data_size": 65536 00:13:27.458 }, 00:13:27.458 { 00:13:27.458 "name": "BaseBdev4", 00:13:27.458 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:27.458 "is_configured": true, 00:13:27.458 "data_offset": 0, 00:13:27.458 "data_size": 65536 00:13:27.458 } 00:13:27.458 ] 00:13:27.458 }' 00:13:27.458 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.458 02:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.717 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.717 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.717 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:27.717 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:27.976 [2024-07-25 02:38:14.764640] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.976 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.977 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.977 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.977 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.977 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.236 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:13:28.236 "name": "Existed_Raid", 00:13:28.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.236 "strip_size_kb": 0, 00:13:28.236 "state": "configuring", 00:13:28.236 "raid_level": "raid1", 00:13:28.236 "superblock": false, 00:13:28.236 "num_base_bdevs": 4, 00:13:28.236 "num_base_bdevs_discovered": 2, 00:13:28.236 "num_base_bdevs_operational": 4, 00:13:28.236 "base_bdevs_list": [ 00:13:28.236 { 00:13:28.236 "name": null, 00:13:28.236 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:28.236 "is_configured": false, 00:13:28.236 "data_offset": 0, 00:13:28.236 "data_size": 65536 00:13:28.236 }, 00:13:28.236 { 00:13:28.236 "name": null, 00:13:28.236 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:28.236 "is_configured": false, 00:13:28.236 "data_offset": 0, 00:13:28.236 "data_size": 65536 00:13:28.236 }, 00:13:28.236 { 00:13:28.236 "name": "BaseBdev3", 00:13:28.236 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:28.236 "is_configured": true, 00:13:28.236 "data_offset": 0, 00:13:28.236 "data_size": 65536 00:13:28.236 }, 00:13:28.236 { 00:13:28.236 "name": "BaseBdev4", 00:13:28.236 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:28.236 "is_configured": true, 00:13:28.236 "data_offset": 0, 00:13:28.236 "data_size": 65536 00:13:28.236 } 00:13:28.236 ] 00:13:28.236 }' 00:13:28.236 02:38:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.236 02:38:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.495 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.495 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:28.755 [2024-07-25 02:38:15.617800] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.755 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.015 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.015 "name": "Existed_Raid", 00:13:29.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.015 "strip_size_kb": 0, 00:13:29.015 "state": "configuring", 00:13:29.015 "raid_level": "raid1", 00:13:29.015 "superblock": false, 00:13:29.015 "num_base_bdevs": 4, 00:13:29.015 "num_base_bdevs_discovered": 3, 00:13:29.015 "num_base_bdevs_operational": 4, 00:13:29.015 "base_bdevs_list": [ 00:13:29.015 { 00:13:29.015 "name": null, 00:13:29.015 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:29.015 "is_configured": false, 00:13:29.015 "data_offset": 0, 00:13:29.015 "data_size": 65536 00:13:29.015 }, 00:13:29.015 { 00:13:29.015 "name": "BaseBdev2", 00:13:29.015 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:29.015 "is_configured": true, 00:13:29.015 "data_offset": 0, 00:13:29.015 "data_size": 65536 00:13:29.015 }, 00:13:29.015 { 00:13:29.015 "name": "BaseBdev3", 00:13:29.015 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:29.015 "is_configured": true, 00:13:29.015 "data_offset": 0, 00:13:29.015 "data_size": 65536 00:13:29.015 }, 00:13:29.015 { 00:13:29.015 "name": "BaseBdev4", 00:13:29.015 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:29.015 "is_configured": true, 00:13:29.015 "data_offset": 0, 00:13:29.015 "data_size": 65536 00:13:29.015 } 00:13:29.015 ] 00:13:29.015 }' 00:13:29.015 02:38:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.015 02:38:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.275 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.275 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.535 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:29.536 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.536 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.795 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ef7bf455-4a2e-11ef-9c8e-7947904e2597 00:13:29.795 [2024-07-25 02:38:16.617996] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.795 [2024-07-25 02:38:16.618013] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x113662a34f00 00:13:29.795 [2024-07-25 02:38:16.618016] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:29.795 [2024-07-25 02:38:16.618034] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x113662a97e20 00:13:29.795 [2024-07-25 02:38:16.618104] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x113662a34f00 00:13:29.795 [2024-07-25 02:38:16.618107] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x113662a34f00 00:13:29.795 [2024-07-25 02:38:16.618133] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.795 NewBaseBdev 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:29.796 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:30.055 02:38:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:30.316 [ 00:13:30.316 { 00:13:30.316 "name": "NewBaseBdev", 00:13:30.316 "aliases": [ 00:13:30.316 "ef7bf455-4a2e-11ef-9c8e-7947904e2597" 00:13:30.316 ], 00:13:30.316 "product_name": "Malloc disk", 00:13:30.316 "block_size": 512, 00:13:30.316 "num_blocks": 65536, 00:13:30.316 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:30.316 "assigned_rate_limits": { 00:13:30.316 "rw_ios_per_sec": 0, 00:13:30.316 "rw_mbytes_per_sec": 0, 00:13:30.316 "r_mbytes_per_sec": 0, 00:13:30.316 "w_mbytes_per_sec": 0 00:13:30.316 }, 00:13:30.316 "claimed": true, 00:13:30.316 "claim_type": "exclusive_write", 00:13:30.316 "zoned": false, 00:13:30.316 "supported_io_types": { 00:13:30.316 "read": true, 00:13:30.316 "write": true, 00:13:30.316 "unmap": true, 00:13:30.316 "flush": true, 00:13:30.316 "reset": true, 00:13:30.316 "nvme_admin": false, 00:13:30.316 "nvme_io": false, 00:13:30.316 "nvme_io_md": false, 00:13:30.316 "write_zeroes": true, 00:13:30.316 "zcopy": true, 00:13:30.316 "get_zone_info": false, 00:13:30.316 "zone_management": false, 00:13:30.316 "zone_append": false, 00:13:30.316 "compare": false, 00:13:30.316 "compare_and_write": false, 00:13:30.316 "abort": true, 00:13:30.316 "seek_hole": false, 00:13:30.316 "seek_data": false, 00:13:30.316 "copy": true, 00:13:30.316 "nvme_iov_md": false 00:13:30.316 }, 00:13:30.316 "memory_domains": [ 00:13:30.316 { 00:13:30.316 "dma_device_id": "system", 00:13:30.316 "dma_device_type": 1 00:13:30.316 }, 00:13:30.316 { 00:13:30.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.316 "dma_device_type": 2 00:13:30.316 } 00:13:30.316 ], 00:13:30.316 "driver_specific": {} 00:13:30.316 } 00:13:30.316 ] 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:30.316 02:38:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.316 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.577 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.577 "name": "Existed_Raid", 00:13:30.577 "uuid": "f251e96e-4a2e-11ef-9c8e-7947904e2597", 00:13:30.577 "strip_size_kb": 0, 00:13:30.577 "state": "online", 00:13:30.577 "raid_level": "raid1", 00:13:30.577 "superblock": false, 00:13:30.577 "num_base_bdevs": 4, 00:13:30.577 "num_base_bdevs_discovered": 4, 00:13:30.577 "num_base_bdevs_operational": 4, 00:13:30.577 "base_bdevs_list": [ 00:13:30.577 { 00:13:30.577 "name": "NewBaseBdev", 00:13:30.577 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:30.577 "is_configured": true, 00:13:30.577 "data_offset": 0, 00:13:30.577 "data_size": 65536 00:13:30.577 }, 00:13:30.577 { 00:13:30.577 "name": "BaseBdev2", 00:13:30.577 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:30.577 "is_configured": true, 00:13:30.577 "data_offset": 0, 00:13:30.577 "data_size": 65536 00:13:30.577 }, 00:13:30.577 { 00:13:30.577 "name": "BaseBdev3", 00:13:30.577 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:30.577 "is_configured": true, 00:13:30.577 "data_offset": 0, 00:13:30.577 "data_size": 65536 00:13:30.577 }, 00:13:30.577 { 00:13:30.577 "name": "BaseBdev4", 00:13:30.577 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:30.577 "is_configured": true, 00:13:30.577 "data_offset": 0, 00:13:30.577 "data_size": 65536 00:13:30.577 } 00:13:30.577 ] 00:13:30.577 }' 00:13:30.577 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.577 02:38:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:30.837 
[2024-07-25 02:38:17.670000] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.837 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:30.837 "name": "Existed_Raid", 00:13:30.837 "aliases": [ 00:13:30.837 "f251e96e-4a2e-11ef-9c8e-7947904e2597" 00:13:30.837 ], 00:13:30.837 "product_name": "Raid Volume", 00:13:30.837 "block_size": 512, 00:13:30.837 "num_blocks": 65536, 00:13:30.837 "uuid": "f251e96e-4a2e-11ef-9c8e-7947904e2597", 00:13:30.837 "assigned_rate_limits": { 00:13:30.837 "rw_ios_per_sec": 0, 00:13:30.837 "rw_mbytes_per_sec": 0, 00:13:30.837 "r_mbytes_per_sec": 0, 00:13:30.837 "w_mbytes_per_sec": 0 00:13:30.837 }, 00:13:30.837 "claimed": false, 00:13:30.837 "zoned": false, 00:13:30.837 "supported_io_types": { 00:13:30.837 "read": true, 00:13:30.837 "write": true, 00:13:30.837 "unmap": false, 00:13:30.837 "flush": false, 00:13:30.837 "reset": true, 00:13:30.837 "nvme_admin": false, 00:13:30.837 "nvme_io": false, 00:13:30.837 "nvme_io_md": false, 00:13:30.837 "write_zeroes": true, 00:13:30.837 "zcopy": false, 00:13:30.837 "get_zone_info": false, 00:13:30.837 "zone_management": false, 00:13:30.837 "zone_append": false, 00:13:30.837 "compare": false, 00:13:30.837 "compare_and_write": false, 00:13:30.837 "abort": false, 00:13:30.837 "seek_hole": false, 00:13:30.838 "seek_data": false, 00:13:30.838 "copy": false, 00:13:30.838 "nvme_iov_md": false 00:13:30.838 }, 00:13:30.838 "memory_domains": [ 00:13:30.838 { 00:13:30.838 "dma_device_id": "system", 00:13:30.838 "dma_device_type": 1 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.838 "dma_device_type": 2 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "system", 00:13:30.838 "dma_device_type": 1 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.838 "dma_device_type": 2 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "system", 00:13:30.838 "dma_device_type": 1 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.838 "dma_device_type": 2 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "system", 00:13:30.838 "dma_device_type": 1 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.838 "dma_device_type": 2 00:13:30.838 } 00:13:30.838 ], 00:13:30.838 "driver_specific": { 00:13:30.838 "raid": { 00:13:30.838 "uuid": "f251e96e-4a2e-11ef-9c8e-7947904e2597", 00:13:30.838 "strip_size_kb": 0, 00:13:30.838 "state": "online", 00:13:30.838 "raid_level": "raid1", 00:13:30.838 "superblock": false, 00:13:30.838 "num_base_bdevs": 4, 00:13:30.838 "num_base_bdevs_discovered": 4, 00:13:30.838 "num_base_bdevs_operational": 4, 00:13:30.838 "base_bdevs_list": [ 00:13:30.838 { 00:13:30.838 "name": "NewBaseBdev", 00:13:30.838 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:30.838 "is_configured": true, 00:13:30.838 "data_offset": 0, 00:13:30.838 "data_size": 65536 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "name": "BaseBdev2", 00:13:30.838 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:30.838 "is_configured": true, 00:13:30.838 "data_offset": 0, 00:13:30.838 "data_size": 65536 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "name": "BaseBdev3", 00:13:30.838 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:30.838 "is_configured": true, 00:13:30.838 "data_offset": 0, 00:13:30.838 "data_size": 65536 00:13:30.838 }, 00:13:30.838 { 00:13:30.838 "name": "BaseBdev4", 
00:13:30.838 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:30.838 "is_configured": true, 00:13:30.838 "data_offset": 0, 00:13:30.838 "data_size": 65536 00:13:30.838 } 00:13:30.838 ] 00:13:30.838 } 00:13:30.838 } 00:13:30.838 }' 00:13:30.838 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.838 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:30.838 BaseBdev2 00:13:30.838 BaseBdev3 00:13:30.838 BaseBdev4' 00:13:30.838 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:30.838 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:30.838 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:31.098 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:31.098 "name": "NewBaseBdev", 00:13:31.098 "aliases": [ 00:13:31.098 "ef7bf455-4a2e-11ef-9c8e-7947904e2597" 00:13:31.098 ], 00:13:31.098 "product_name": "Malloc disk", 00:13:31.098 "block_size": 512, 00:13:31.098 "num_blocks": 65536, 00:13:31.098 "uuid": "ef7bf455-4a2e-11ef-9c8e-7947904e2597", 00:13:31.098 "assigned_rate_limits": { 00:13:31.098 "rw_ios_per_sec": 0, 00:13:31.098 "rw_mbytes_per_sec": 0, 00:13:31.098 "r_mbytes_per_sec": 0, 00:13:31.098 "w_mbytes_per_sec": 0 00:13:31.098 }, 00:13:31.098 "claimed": true, 00:13:31.098 "claim_type": "exclusive_write", 00:13:31.098 "zoned": false, 00:13:31.098 "supported_io_types": { 00:13:31.098 "read": true, 00:13:31.098 "write": true, 00:13:31.098 "unmap": true, 00:13:31.098 "flush": true, 00:13:31.098 "reset": true, 00:13:31.098 "nvme_admin": false, 00:13:31.098 "nvme_io": false, 00:13:31.098 "nvme_io_md": false, 00:13:31.098 "write_zeroes": true, 00:13:31.098 "zcopy": true, 00:13:31.098 "get_zone_info": false, 00:13:31.098 "zone_management": false, 00:13:31.098 "zone_append": false, 00:13:31.099 "compare": false, 00:13:31.099 "compare_and_write": false, 00:13:31.099 "abort": true, 00:13:31.099 "seek_hole": false, 00:13:31.099 "seek_data": false, 00:13:31.099 "copy": true, 00:13:31.099 "nvme_iov_md": false 00:13:31.099 }, 00:13:31.099 "memory_domains": [ 00:13:31.099 { 00:13:31.099 "dma_device_id": "system", 00:13:31.099 "dma_device_type": 1 00:13:31.099 }, 00:13:31.099 { 00:13:31.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.099 "dma_device_type": 2 00:13:31.099 } 00:13:31.099 ], 00:13:31.099 "driver_specific": {} 00:13:31.099 }' 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.099 
02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:31.099 02:38:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:31.359 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:31.359 "name": "BaseBdev2", 00:13:31.359 "aliases": [ 00:13:31.359 "ed9cc99c-4a2e-11ef-9c8e-7947904e2597" 00:13:31.359 ], 00:13:31.359 "product_name": "Malloc disk", 00:13:31.359 "block_size": 512, 00:13:31.359 "num_blocks": 65536, 00:13:31.359 "uuid": "ed9cc99c-4a2e-11ef-9c8e-7947904e2597", 00:13:31.359 "assigned_rate_limits": { 00:13:31.359 "rw_ios_per_sec": 0, 00:13:31.359 "rw_mbytes_per_sec": 0, 00:13:31.359 "r_mbytes_per_sec": 0, 00:13:31.359 "w_mbytes_per_sec": 0 00:13:31.359 }, 00:13:31.359 "claimed": true, 00:13:31.359 "claim_type": "exclusive_write", 00:13:31.359 "zoned": false, 00:13:31.359 "supported_io_types": { 00:13:31.359 "read": true, 00:13:31.359 "write": true, 00:13:31.359 "unmap": true, 00:13:31.359 "flush": true, 00:13:31.359 "reset": true, 00:13:31.359 "nvme_admin": false, 00:13:31.359 "nvme_io": false, 00:13:31.359 "nvme_io_md": false, 00:13:31.359 "write_zeroes": true, 00:13:31.359 "zcopy": true, 00:13:31.359 "get_zone_info": false, 00:13:31.360 "zone_management": false, 00:13:31.360 "zone_append": false, 00:13:31.360 "compare": false, 00:13:31.360 "compare_and_write": false, 00:13:31.360 "abort": true, 00:13:31.360 "seek_hole": false, 00:13:31.360 "seek_data": false, 00:13:31.360 "copy": true, 00:13:31.360 "nvme_iov_md": false 00:13:31.360 }, 00:13:31.360 "memory_domains": [ 00:13:31.360 { 00:13:31.360 "dma_device_id": "system", 00:13:31.360 "dma_device_type": 1 00:13:31.360 }, 00:13:31.360 { 00:13:31.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.360 "dma_device_type": 2 00:13:31.360 } 00:13:31.360 ], 00:13:31.360 "driver_specific": {} 00:13:31.360 }' 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:31.360 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:13:31.620 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.620 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:31.620 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:31.620 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:31.620 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:31.620 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:31.620 "name": "BaseBdev3", 00:13:31.620 "aliases": [ 00:13:31.620 "edf1a1e7-4a2e-11ef-9c8e-7947904e2597" 00:13:31.620 ], 00:13:31.620 "product_name": "Malloc disk", 00:13:31.620 "block_size": 512, 00:13:31.620 "num_blocks": 65536, 00:13:31.620 "uuid": "edf1a1e7-4a2e-11ef-9c8e-7947904e2597", 00:13:31.620 "assigned_rate_limits": { 00:13:31.620 "rw_ios_per_sec": 0, 00:13:31.620 "rw_mbytes_per_sec": 0, 00:13:31.620 "r_mbytes_per_sec": 0, 00:13:31.620 "w_mbytes_per_sec": 0 00:13:31.620 }, 00:13:31.620 "claimed": true, 00:13:31.620 "claim_type": "exclusive_write", 00:13:31.620 "zoned": false, 00:13:31.620 "supported_io_types": { 00:13:31.620 "read": true, 00:13:31.620 "write": true, 00:13:31.620 "unmap": true, 00:13:31.620 "flush": true, 00:13:31.620 "reset": true, 00:13:31.620 "nvme_admin": false, 00:13:31.620 "nvme_io": false, 00:13:31.620 "nvme_io_md": false, 00:13:31.620 "write_zeroes": true, 00:13:31.620 "zcopy": true, 00:13:31.620 "get_zone_info": false, 00:13:31.620 "zone_management": false, 00:13:31.620 "zone_append": false, 00:13:31.620 "compare": false, 00:13:31.620 "compare_and_write": false, 00:13:31.620 "abort": true, 00:13:31.620 "seek_hole": false, 00:13:31.620 "seek_data": false, 00:13:31.620 "copy": true, 00:13:31.620 "nvme_iov_md": false 00:13:31.620 }, 00:13:31.620 "memory_domains": [ 00:13:31.620 { 00:13:31.620 "dma_device_id": "system", 00:13:31.620 "dma_device_type": 1 00:13:31.620 }, 00:13:31.620 { 00:13:31.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.620 "dma_device_type": 2 00:13:31.621 } 00:13:31.621 ], 00:13:31.621 "driver_specific": {} 00:13:31.621 }' 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:31.621 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:31.881 "name": "BaseBdev4", 00:13:31.881 "aliases": [ 00:13:31.881 "ee44a596-4a2e-11ef-9c8e-7947904e2597" 00:13:31.881 ], 00:13:31.881 "product_name": "Malloc disk", 00:13:31.881 "block_size": 512, 00:13:31.881 "num_blocks": 65536, 00:13:31.881 "uuid": "ee44a596-4a2e-11ef-9c8e-7947904e2597", 00:13:31.881 "assigned_rate_limits": { 00:13:31.881 "rw_ios_per_sec": 0, 00:13:31.881 "rw_mbytes_per_sec": 0, 00:13:31.881 "r_mbytes_per_sec": 0, 00:13:31.881 "w_mbytes_per_sec": 0 00:13:31.881 }, 00:13:31.881 "claimed": true, 00:13:31.881 "claim_type": "exclusive_write", 00:13:31.881 "zoned": false, 00:13:31.881 "supported_io_types": { 00:13:31.881 "read": true, 00:13:31.881 "write": true, 00:13:31.881 "unmap": true, 00:13:31.881 "flush": true, 00:13:31.881 "reset": true, 00:13:31.881 "nvme_admin": false, 00:13:31.881 "nvme_io": false, 00:13:31.881 "nvme_io_md": false, 00:13:31.881 "write_zeroes": true, 00:13:31.881 "zcopy": true, 00:13:31.881 "get_zone_info": false, 00:13:31.881 "zone_management": false, 00:13:31.881 "zone_append": false, 00:13:31.881 "compare": false, 00:13:31.881 "compare_and_write": false, 00:13:31.881 "abort": true, 00:13:31.881 "seek_hole": false, 00:13:31.881 "seek_data": false, 00:13:31.881 "copy": true, 00:13:31.881 "nvme_iov_md": false 00:13:31.881 }, 00:13:31.881 "memory_domains": [ 00:13:31.881 { 00:13:31.881 "dma_device_id": "system", 00:13:31.881 "dma_device_type": 1 00:13:31.881 }, 00:13:31.881 { 00:13:31.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.881 "dma_device_type": 2 00:13:31.881 } 00:13:31.881 ], 00:13:31.881 "driver_specific": {} 00:13:31.881 }' 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:31.881 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:32.142 02:38:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:13:32.142 [2024-07-25 02:38:19.002093] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:32.142 [2024-07-25 02:38:19.002105] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:32.142 [2024-07-25 02:38:19.002115] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.142 [2024-07-25 02:38:19.002192] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.142 [2024-07-25 02:38:19.002195] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x113662a34f00 name Existed_Raid, state offline 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 62529 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 62529 ']' 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 62529 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 62529 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:32.142 killing process with pid 62529 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62529' 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 62529 00:13:32.142 [2024-07-25 02:38:19.032413] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.142 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 62529 00:13:32.402 [2024-07-25 02:38:19.069441] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:32.662 00:13:32.662 real 0m20.988s 00:13:32.662 user 0m37.469s 00:13:32.662 sys 0m3.761s 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.662 ************************************ 00:13:32.662 END TEST raid_state_function_test 00:13:32.662 ************************************ 00:13:32.662 02:38:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:32.662 02:38:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:32.662 02:38:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:32.662 02:38:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.662 02:38:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.662 ************************************ 00:13:32.662 START TEST raid_state_function_test_sb 00:13:32.662 ************************************ 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid1 4 true 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63324 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63324' 00:13:32.662 Process raid pid: 63324 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63324 /var/tmp/spdk-raid.sock 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63324 ']' 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.662 02:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.662 [2024-07-25 02:38:19.421529] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:13:32.662 [2024-07-25 02:38:19.421881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:33.231 EAL: TSC is not safe to use in SMP mode 00:13:33.231 EAL: TSC is not invariant 00:13:33.231 [2024-07-25 02:38:19.894908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.231 [2024-07-25 02:38:19.990436] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:33.231 [2024-07-25 02:38:19.992109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.231 [2024-07-25 02:38:19.992694] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.231 [2024-07-25 02:38:19.992705] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.489 02:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.489 02:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:33.489 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:33.747 [2024-07-25 02:38:20.455611] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.747 [2024-07-25 02:38:20.455650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.747 [2024-07-25 02:38:20.455654] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.747 [2024-07-25 02:38:20.455660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.747 [2024-07-25 02:38:20.455662] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:33.747 [2024-07-25 02:38:20.455668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.747 [2024-07-25 02:38:20.455670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:33.748 [2024-07-25 02:38:20.455691] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.748 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.007 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.007 "name": "Existed_Raid", 00:13:34.007 "uuid": "f49b7b15-4a2e-11ef-9c8e-7947904e2597", 00:13:34.007 "strip_size_kb": 0, 00:13:34.007 "state": "configuring", 00:13:34.007 "raid_level": "raid1", 00:13:34.007 "superblock": true, 00:13:34.007 "num_base_bdevs": 4, 00:13:34.007 "num_base_bdevs_discovered": 0, 00:13:34.007 "num_base_bdevs_operational": 4, 00:13:34.007 "base_bdevs_list": [ 00:13:34.007 { 00:13:34.007 "name": "BaseBdev1", 00:13:34.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.007 "is_configured": false, 00:13:34.007 "data_offset": 0, 00:13:34.007 "data_size": 0 00:13:34.007 }, 00:13:34.007 { 00:13:34.007 "name": "BaseBdev2", 00:13:34.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.007 "is_configured": false, 00:13:34.007 "data_offset": 0, 00:13:34.007 "data_size": 0 00:13:34.007 }, 00:13:34.007 { 00:13:34.007 "name": "BaseBdev3", 00:13:34.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.007 "is_configured": false, 00:13:34.007 "data_offset": 0, 00:13:34.007 "data_size": 0 00:13:34.007 }, 00:13:34.007 { 00:13:34.007 "name": "BaseBdev4", 00:13:34.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.007 "is_configured": false, 00:13:34.007 "data_offset": 0, 00:13:34.007 "data_size": 0 00:13:34.007 } 00:13:34.007 ] 00:13:34.007 }' 00:13:34.007 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.007 02:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.007 02:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:34.266 [2024-07-25 02:38:21.099637] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.266 [2024-07-25 
02:38:21.099650] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3cc643434500 name Existed_Raid, state configuring 00:13:34.266 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:34.525 [2024-07-25 02:38:21.291661] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.525 [2024-07-25 02:38:21.291688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.525 [2024-07-25 02:38:21.291691] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.525 [2024-07-25 02:38:21.291712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.525 [2024-07-25 02:38:21.291715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.525 [2024-07-25 02:38:21.291720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.525 [2024-07-25 02:38:21.291722] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:34.525 [2024-07-25 02:38:21.291728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.525 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.784 [2024-07-25 02:38:21.500562] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.784 BaseBdev1 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:34.784 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.043 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.043 [ 00:13:35.043 { 00:13:35.043 "name": "BaseBdev1", 00:13:35.043 "aliases": [ 00:13:35.043 "f53acb76-4a2e-11ef-9c8e-7947904e2597" 00:13:35.043 ], 00:13:35.043 "product_name": "Malloc disk", 00:13:35.043 "block_size": 512, 00:13:35.043 "num_blocks": 65536, 00:13:35.043 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:35.043 "assigned_rate_limits": { 00:13:35.043 "rw_ios_per_sec": 0, 00:13:35.043 "rw_mbytes_per_sec": 0, 00:13:35.043 "r_mbytes_per_sec": 0, 00:13:35.043 "w_mbytes_per_sec": 0 00:13:35.043 }, 00:13:35.043 "claimed": true, 00:13:35.043 "claim_type": "exclusive_write", 00:13:35.043 "zoned": false, 00:13:35.043 "supported_io_types": { 00:13:35.043 "read": 
true, 00:13:35.043 "write": true, 00:13:35.043 "unmap": true, 00:13:35.043 "flush": true, 00:13:35.043 "reset": true, 00:13:35.043 "nvme_admin": false, 00:13:35.043 "nvme_io": false, 00:13:35.043 "nvme_io_md": false, 00:13:35.043 "write_zeroes": true, 00:13:35.043 "zcopy": true, 00:13:35.043 "get_zone_info": false, 00:13:35.043 "zone_management": false, 00:13:35.043 "zone_append": false, 00:13:35.043 "compare": false, 00:13:35.043 "compare_and_write": false, 00:13:35.043 "abort": true, 00:13:35.043 "seek_hole": false, 00:13:35.043 "seek_data": false, 00:13:35.043 "copy": true, 00:13:35.043 "nvme_iov_md": false 00:13:35.043 }, 00:13:35.044 "memory_domains": [ 00:13:35.044 { 00:13:35.044 "dma_device_id": "system", 00:13:35.044 "dma_device_type": 1 00:13:35.044 }, 00:13:35.044 { 00:13:35.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.044 "dma_device_type": 2 00:13:35.044 } 00:13:35.044 ], 00:13:35.044 "driver_specific": {} 00:13:35.044 } 00:13:35.044 ] 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.044 02:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.303 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.303 "name": "Existed_Raid", 00:13:35.303 "uuid": "f51b0d75-4a2e-11ef-9c8e-7947904e2597", 00:13:35.303 "strip_size_kb": 0, 00:13:35.303 "state": "configuring", 00:13:35.303 "raid_level": "raid1", 00:13:35.303 "superblock": true, 00:13:35.303 "num_base_bdevs": 4, 00:13:35.303 "num_base_bdevs_discovered": 1, 00:13:35.303 "num_base_bdevs_operational": 4, 00:13:35.303 "base_bdevs_list": [ 00:13:35.303 { 00:13:35.303 "name": "BaseBdev1", 00:13:35.303 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:35.303 "is_configured": true, 00:13:35.303 "data_offset": 2048, 00:13:35.303 "data_size": 63488 00:13:35.303 }, 00:13:35.303 { 00:13:35.303 "name": "BaseBdev2", 00:13:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.303 "is_configured": false, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 0 00:13:35.303 }, 00:13:35.303 { 
00:13:35.303 "name": "BaseBdev3", 00:13:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.303 "is_configured": false, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 0 00:13:35.303 }, 00:13:35.303 { 00:13:35.303 "name": "BaseBdev4", 00:13:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.303 "is_configured": false, 00:13:35.303 "data_offset": 0, 00:13:35.303 "data_size": 0 00:13:35.303 } 00:13:35.303 ] 00:13:35.303 }' 00:13:35.303 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.303 02:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.562 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:35.821 [2024-07-25 02:38:22.563769] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.822 [2024-07-25 02:38:22.563788] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3cc643434500 name Existed_Raid, state configuring 00:13:35.822 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:36.081 [2024-07-25 02:38:22.743797] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:36.081 [2024-07-25 02:38:22.744490] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:36.081 [2024-07-25 02:38:22.744525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:36.081 [2024-07-25 02:38:22.744528] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:36.081 [2024-07-25 02:38:22.744534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:36.081 [2024-07-25 02:38:22.744537] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:36.081 [2024-07-25 02:38:22.744546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.081 "name": "Existed_Raid", 00:13:36.081 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:36.081 "strip_size_kb": 0, 00:13:36.081 "state": "configuring", 00:13:36.081 "raid_level": "raid1", 00:13:36.081 "superblock": true, 00:13:36.081 "num_base_bdevs": 4, 00:13:36.081 "num_base_bdevs_discovered": 1, 00:13:36.081 "num_base_bdevs_operational": 4, 00:13:36.081 "base_bdevs_list": [ 00:13:36.081 { 00:13:36.081 "name": "BaseBdev1", 00:13:36.081 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:36.081 "is_configured": true, 00:13:36.081 "data_offset": 2048, 00:13:36.081 "data_size": 63488 00:13:36.081 }, 00:13:36.081 { 00:13:36.081 "name": "BaseBdev2", 00:13:36.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.081 "is_configured": false, 00:13:36.081 "data_offset": 0, 00:13:36.081 "data_size": 0 00:13:36.081 }, 00:13:36.081 { 00:13:36.081 "name": "BaseBdev3", 00:13:36.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.081 "is_configured": false, 00:13:36.081 "data_offset": 0, 00:13:36.081 "data_size": 0 00:13:36.081 }, 00:13:36.081 { 00:13:36.081 "name": "BaseBdev4", 00:13:36.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.081 "is_configured": false, 00:13:36.081 "data_offset": 0, 00:13:36.081 "data_size": 0 00:13:36.081 } 00:13:36.081 ] 00:13:36.081 }' 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.081 02:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.341 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:36.601 [2024-07-25 02:38:23.387957] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.601 BaseBdev2 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:36.601 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:36.860 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:13:36.860 [ 00:13:36.860 { 00:13:36.860 "name": "BaseBdev2", 00:13:36.860 "aliases": [ 00:13:36.860 "f65ae852-4a2e-11ef-9c8e-7947904e2597" 00:13:36.860 ], 00:13:36.860 "product_name": "Malloc disk", 00:13:36.860 "block_size": 512, 00:13:36.860 "num_blocks": 65536, 00:13:36.860 "uuid": "f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:36.860 "assigned_rate_limits": { 00:13:36.860 "rw_ios_per_sec": 0, 00:13:36.860 "rw_mbytes_per_sec": 0, 00:13:36.860 "r_mbytes_per_sec": 0, 00:13:36.860 "w_mbytes_per_sec": 0 00:13:36.860 }, 00:13:36.860 "claimed": true, 00:13:36.860 "claim_type": "exclusive_write", 00:13:36.860 "zoned": false, 00:13:36.860 "supported_io_types": { 00:13:36.860 "read": true, 00:13:36.860 "write": true, 00:13:36.860 "unmap": true, 00:13:36.860 "flush": true, 00:13:36.860 "reset": true, 00:13:36.860 "nvme_admin": false, 00:13:36.860 "nvme_io": false, 00:13:36.860 "nvme_io_md": false, 00:13:36.860 "write_zeroes": true, 00:13:36.860 "zcopy": true, 00:13:36.860 "get_zone_info": false, 00:13:36.860 "zone_management": false, 00:13:36.860 "zone_append": false, 00:13:36.860 "compare": false, 00:13:36.860 "compare_and_write": false, 00:13:36.860 "abort": true, 00:13:36.860 "seek_hole": false, 00:13:36.860 "seek_data": false, 00:13:36.860 "copy": true, 00:13:36.860 "nvme_iov_md": false 00:13:36.860 }, 00:13:36.860 "memory_domains": [ 00:13:36.860 { 00:13:36.860 "dma_device_id": "system", 00:13:36.860 "dma_device_type": 1 00:13:36.860 }, 00:13:36.860 { 00:13:36.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.860 "dma_device_type": 2 00:13:36.860 } 00:13:36.860 ], 00:13:36.860 "driver_specific": {} 00:13:36.860 } 00:13:36.860 ] 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.119 02:38:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.119 "name": "Existed_Raid", 00:13:37.119 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:37.119 "strip_size_kb": 0, 00:13:37.119 "state": "configuring", 00:13:37.119 "raid_level": "raid1", 00:13:37.119 "superblock": true, 00:13:37.119 "num_base_bdevs": 4, 00:13:37.119 "num_base_bdevs_discovered": 2, 00:13:37.119 "num_base_bdevs_operational": 4, 00:13:37.119 "base_bdevs_list": [ 00:13:37.119 { 00:13:37.120 "name": "BaseBdev1", 00:13:37.120 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:37.120 "is_configured": true, 00:13:37.120 "data_offset": 2048, 00:13:37.120 "data_size": 63488 00:13:37.120 }, 00:13:37.120 { 00:13:37.120 "name": "BaseBdev2", 00:13:37.120 "uuid": "f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:37.120 "is_configured": true, 00:13:37.120 "data_offset": 2048, 00:13:37.120 "data_size": 63488 00:13:37.120 }, 00:13:37.120 { 00:13:37.120 "name": "BaseBdev3", 00:13:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.120 "is_configured": false, 00:13:37.120 "data_offset": 0, 00:13:37.120 "data_size": 0 00:13:37.120 }, 00:13:37.120 { 00:13:37.120 "name": "BaseBdev4", 00:13:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.120 "is_configured": false, 00:13:37.120 "data_offset": 0, 00:13:37.120 "data_size": 0 00:13:37.120 } 00:13:37.120 ] 00:13:37.120 }' 00:13:37.120 02:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.120 02:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.379 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:37.646 [2024-07-25 02:38:24.412070] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.646 BaseBdev3 00:13:37.646 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:37.646 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:37.646 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:37.646 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:37.646 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:37.646 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:37.647 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:37.924 [ 00:13:37.924 { 00:13:37.924 "name": "BaseBdev3", 00:13:37.924 "aliases": [ 00:13:37.924 "f6f72d30-4a2e-11ef-9c8e-7947904e2597" 00:13:37.924 ], 00:13:37.924 "product_name": "Malloc disk", 00:13:37.924 "block_size": 512, 00:13:37.924 "num_blocks": 65536, 00:13:37.924 "uuid": "f6f72d30-4a2e-11ef-9c8e-7947904e2597", 00:13:37.924 "assigned_rate_limits": { 00:13:37.924 "rw_ios_per_sec": 0, 00:13:37.924 "rw_mbytes_per_sec": 0, 00:13:37.924 "r_mbytes_per_sec": 0, 00:13:37.924 "w_mbytes_per_sec": 0 00:13:37.924 }, 00:13:37.924 
"claimed": true, 00:13:37.924 "claim_type": "exclusive_write", 00:13:37.924 "zoned": false, 00:13:37.924 "supported_io_types": { 00:13:37.924 "read": true, 00:13:37.924 "write": true, 00:13:37.924 "unmap": true, 00:13:37.924 "flush": true, 00:13:37.924 "reset": true, 00:13:37.924 "nvme_admin": false, 00:13:37.924 "nvme_io": false, 00:13:37.924 "nvme_io_md": false, 00:13:37.924 "write_zeroes": true, 00:13:37.924 "zcopy": true, 00:13:37.924 "get_zone_info": false, 00:13:37.924 "zone_management": false, 00:13:37.924 "zone_append": false, 00:13:37.924 "compare": false, 00:13:37.924 "compare_and_write": false, 00:13:37.924 "abort": true, 00:13:37.924 "seek_hole": false, 00:13:37.924 "seek_data": false, 00:13:37.924 "copy": true, 00:13:37.924 "nvme_iov_md": false 00:13:37.924 }, 00:13:37.924 "memory_domains": [ 00:13:37.924 { 00:13:37.924 "dma_device_id": "system", 00:13:37.924 "dma_device_type": 1 00:13:37.924 }, 00:13:37.924 { 00:13:37.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.924 "dma_device_type": 2 00:13:37.924 } 00:13:37.924 ], 00:13:37.924 "driver_specific": {} 00:13:37.924 } 00:13:37.924 ] 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.924 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.198 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.198 02:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.198 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.198 "name": "Existed_Raid", 00:13:38.198 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:38.198 "strip_size_kb": 0, 00:13:38.198 "state": "configuring", 00:13:38.198 "raid_level": "raid1", 00:13:38.198 "superblock": true, 00:13:38.198 "num_base_bdevs": 4, 00:13:38.198 "num_base_bdevs_discovered": 3, 00:13:38.198 "num_base_bdevs_operational": 4, 00:13:38.198 "base_bdevs_list": [ 00:13:38.198 { 00:13:38.198 "name": "BaseBdev1", 00:13:38.198 "uuid": 
"f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:38.198 "is_configured": true, 00:13:38.198 "data_offset": 2048, 00:13:38.198 "data_size": 63488 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "name": "BaseBdev2", 00:13:38.198 "uuid": "f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:38.198 "is_configured": true, 00:13:38.198 "data_offset": 2048, 00:13:38.198 "data_size": 63488 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "name": "BaseBdev3", 00:13:38.198 "uuid": "f6f72d30-4a2e-11ef-9c8e-7947904e2597", 00:13:38.198 "is_configured": true, 00:13:38.198 "data_offset": 2048, 00:13:38.198 "data_size": 63488 00:13:38.198 }, 00:13:38.198 { 00:13:38.198 "name": "BaseBdev4", 00:13:38.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.198 "is_configured": false, 00:13:38.198 "data_offset": 0, 00:13:38.198 "data_size": 0 00:13:38.198 } 00:13:38.198 ] 00:13:38.198 }' 00:13:38.198 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.198 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.458 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:38.717 [2024-07-25 02:38:25.448150] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.717 [2024-07-25 02:38:25.448199] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3cc643434a00 00:13:38.717 [2024-07-25 02:38:25.448203] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.717 [2024-07-25 02:38:25.448220] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3cc643497e20 00:13:38.717 [2024-07-25 02:38:25.448264] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3cc643434a00 00:13:38.717 [2024-07-25 02:38:25.448267] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3cc643434a00 00:13:38.717 [2024-07-25 02:38:25.448282] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.717 BaseBdev4 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:38.718 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:38.977 [ 00:13:38.977 { 00:13:38.977 "name": "BaseBdev4", 00:13:38.977 "aliases": [ 00:13:38.977 "f795456c-4a2e-11ef-9c8e-7947904e2597" 00:13:38.977 ], 00:13:38.977 "product_name": "Malloc disk", 00:13:38.977 "block_size": 512, 00:13:38.977 "num_blocks": 65536, 
00:13:38.977 "uuid": "f795456c-4a2e-11ef-9c8e-7947904e2597", 00:13:38.977 "assigned_rate_limits": { 00:13:38.977 "rw_ios_per_sec": 0, 00:13:38.977 "rw_mbytes_per_sec": 0, 00:13:38.977 "r_mbytes_per_sec": 0, 00:13:38.977 "w_mbytes_per_sec": 0 00:13:38.977 }, 00:13:38.977 "claimed": true, 00:13:38.977 "claim_type": "exclusive_write", 00:13:38.977 "zoned": false, 00:13:38.977 "supported_io_types": { 00:13:38.977 "read": true, 00:13:38.977 "write": true, 00:13:38.977 "unmap": true, 00:13:38.977 "flush": true, 00:13:38.977 "reset": true, 00:13:38.977 "nvme_admin": false, 00:13:38.977 "nvme_io": false, 00:13:38.977 "nvme_io_md": false, 00:13:38.977 "write_zeroes": true, 00:13:38.977 "zcopy": true, 00:13:38.977 "get_zone_info": false, 00:13:38.977 "zone_management": false, 00:13:38.977 "zone_append": false, 00:13:38.977 "compare": false, 00:13:38.977 "compare_and_write": false, 00:13:38.977 "abort": true, 00:13:38.977 "seek_hole": false, 00:13:38.977 "seek_data": false, 00:13:38.977 "copy": true, 00:13:38.977 "nvme_iov_md": false 00:13:38.977 }, 00:13:38.977 "memory_domains": [ 00:13:38.977 { 00:13:38.977 "dma_device_id": "system", 00:13:38.977 "dma_device_type": 1 00:13:38.977 }, 00:13:38.977 { 00:13:38.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.977 "dma_device_type": 2 00:13:38.977 } 00:13:38.977 ], 00:13:38.977 "driver_specific": {} 00:13:38.977 } 00:13:38.977 ] 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.977 02:38:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.237 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.237 "name": "Existed_Raid", 00:13:39.237 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:39.237 "strip_size_kb": 0, 00:13:39.237 "state": "online", 00:13:39.237 "raid_level": "raid1", 00:13:39.237 "superblock": true, 
00:13:39.237 "num_base_bdevs": 4, 00:13:39.237 "num_base_bdevs_discovered": 4, 00:13:39.237 "num_base_bdevs_operational": 4, 00:13:39.237 "base_bdevs_list": [ 00:13:39.237 { 00:13:39.237 "name": "BaseBdev1", 00:13:39.237 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:39.237 "is_configured": true, 00:13:39.237 "data_offset": 2048, 00:13:39.237 "data_size": 63488 00:13:39.237 }, 00:13:39.237 { 00:13:39.237 "name": "BaseBdev2", 00:13:39.237 "uuid": "f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:39.237 "is_configured": true, 00:13:39.237 "data_offset": 2048, 00:13:39.237 "data_size": 63488 00:13:39.237 }, 00:13:39.237 { 00:13:39.237 "name": "BaseBdev3", 00:13:39.237 "uuid": "f6f72d30-4a2e-11ef-9c8e-7947904e2597", 00:13:39.237 "is_configured": true, 00:13:39.237 "data_offset": 2048, 00:13:39.237 "data_size": 63488 00:13:39.237 }, 00:13:39.237 { 00:13:39.237 "name": "BaseBdev4", 00:13:39.237 "uuid": "f795456c-4a2e-11ef-9c8e-7947904e2597", 00:13:39.237 "is_configured": true, 00:13:39.237 "data_offset": 2048, 00:13:39.237 "data_size": 63488 00:13:39.237 } 00:13:39.237 ] 00:13:39.237 }' 00:13:39.237 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.237 02:38:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:39.497 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:39.757 [2024-07-25 02:38:26.512190] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.757 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:39.757 "name": "Existed_Raid", 00:13:39.757 "aliases": [ 00:13:39.757 "f5f8a164-4a2e-11ef-9c8e-7947904e2597" 00:13:39.757 ], 00:13:39.757 "product_name": "Raid Volume", 00:13:39.758 "block_size": 512, 00:13:39.758 "num_blocks": 63488, 00:13:39.758 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:39.758 "assigned_rate_limits": { 00:13:39.758 "rw_ios_per_sec": 0, 00:13:39.758 "rw_mbytes_per_sec": 0, 00:13:39.758 "r_mbytes_per_sec": 0, 00:13:39.758 "w_mbytes_per_sec": 0 00:13:39.758 }, 00:13:39.758 "claimed": false, 00:13:39.758 "zoned": false, 00:13:39.758 "supported_io_types": { 00:13:39.758 "read": true, 00:13:39.758 "write": true, 00:13:39.758 "unmap": false, 00:13:39.758 "flush": false, 00:13:39.758 "reset": true, 00:13:39.758 "nvme_admin": false, 00:13:39.758 "nvme_io": false, 00:13:39.758 "nvme_io_md": false, 00:13:39.758 "write_zeroes": true, 00:13:39.758 "zcopy": false, 00:13:39.758 "get_zone_info": false, 00:13:39.758 "zone_management": false, 00:13:39.758 "zone_append": false, 00:13:39.758 "compare": 
false, 00:13:39.758 "compare_and_write": false, 00:13:39.758 "abort": false, 00:13:39.758 "seek_hole": false, 00:13:39.758 "seek_data": false, 00:13:39.758 "copy": false, 00:13:39.758 "nvme_iov_md": false 00:13:39.758 }, 00:13:39.758 "memory_domains": [ 00:13:39.758 { 00:13:39.758 "dma_device_id": "system", 00:13:39.758 "dma_device_type": 1 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.758 "dma_device_type": 2 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "system", 00:13:39.758 "dma_device_type": 1 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.758 "dma_device_type": 2 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "system", 00:13:39.758 "dma_device_type": 1 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.758 "dma_device_type": 2 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "system", 00:13:39.758 "dma_device_type": 1 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.758 "dma_device_type": 2 00:13:39.758 } 00:13:39.758 ], 00:13:39.758 "driver_specific": { 00:13:39.758 "raid": { 00:13:39.758 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:39.758 "strip_size_kb": 0, 00:13:39.758 "state": "online", 00:13:39.758 "raid_level": "raid1", 00:13:39.758 "superblock": true, 00:13:39.758 "num_base_bdevs": 4, 00:13:39.758 "num_base_bdevs_discovered": 4, 00:13:39.758 "num_base_bdevs_operational": 4, 00:13:39.758 "base_bdevs_list": [ 00:13:39.758 { 00:13:39.758 "name": "BaseBdev1", 00:13:39.758 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:39.758 "is_configured": true, 00:13:39.758 "data_offset": 2048, 00:13:39.758 "data_size": 63488 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "name": "BaseBdev2", 00:13:39.758 "uuid": "f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:39.758 "is_configured": true, 00:13:39.758 "data_offset": 2048, 00:13:39.758 "data_size": 63488 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "name": "BaseBdev3", 00:13:39.758 "uuid": "f6f72d30-4a2e-11ef-9c8e-7947904e2597", 00:13:39.758 "is_configured": true, 00:13:39.758 "data_offset": 2048, 00:13:39.758 "data_size": 63488 00:13:39.758 }, 00:13:39.758 { 00:13:39.758 "name": "BaseBdev4", 00:13:39.758 "uuid": "f795456c-4a2e-11ef-9c8e-7947904e2597", 00:13:39.758 "is_configured": true, 00:13:39.758 "data_offset": 2048, 00:13:39.758 "data_size": 63488 00:13:39.758 } 00:13:39.758 ] 00:13:39.758 } 00:13:39.758 } 00:13:39.758 }' 00:13:39.758 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.758 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:39.758 BaseBdev2 00:13:39.758 BaseBdev3 00:13:39.758 BaseBdev4' 00:13:39.758 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:39.758 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:39.758 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.018 "name": "BaseBdev1", 00:13:40.018 "aliases": [ 00:13:40.018 "f53acb76-4a2e-11ef-9c8e-7947904e2597" 00:13:40.018 
], 00:13:40.018 "product_name": "Malloc disk", 00:13:40.018 "block_size": 512, 00:13:40.018 "num_blocks": 65536, 00:13:40.018 "uuid": "f53acb76-4a2e-11ef-9c8e-7947904e2597", 00:13:40.018 "assigned_rate_limits": { 00:13:40.018 "rw_ios_per_sec": 0, 00:13:40.018 "rw_mbytes_per_sec": 0, 00:13:40.018 "r_mbytes_per_sec": 0, 00:13:40.018 "w_mbytes_per_sec": 0 00:13:40.018 }, 00:13:40.018 "claimed": true, 00:13:40.018 "claim_type": "exclusive_write", 00:13:40.018 "zoned": false, 00:13:40.018 "supported_io_types": { 00:13:40.018 "read": true, 00:13:40.018 "write": true, 00:13:40.018 "unmap": true, 00:13:40.018 "flush": true, 00:13:40.018 "reset": true, 00:13:40.018 "nvme_admin": false, 00:13:40.018 "nvme_io": false, 00:13:40.018 "nvme_io_md": false, 00:13:40.018 "write_zeroes": true, 00:13:40.018 "zcopy": true, 00:13:40.018 "get_zone_info": false, 00:13:40.018 "zone_management": false, 00:13:40.018 "zone_append": false, 00:13:40.018 "compare": false, 00:13:40.018 "compare_and_write": false, 00:13:40.018 "abort": true, 00:13:40.018 "seek_hole": false, 00:13:40.018 "seek_data": false, 00:13:40.018 "copy": true, 00:13:40.018 "nvme_iov_md": false 00:13:40.018 }, 00:13:40.018 "memory_domains": [ 00:13:40.018 { 00:13:40.018 "dma_device_id": "system", 00:13:40.018 "dma_device_type": 1 00:13:40.018 }, 00:13:40.018 { 00:13:40.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.018 "dma_device_type": 2 00:13:40.018 } 00:13:40.018 ], 00:13:40.018 "driver_specific": {} 00:13:40.018 }' 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:40.018 02:38:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.278 "name": "BaseBdev2", 00:13:40.278 "aliases": [ 00:13:40.278 "f65ae852-4a2e-11ef-9c8e-7947904e2597" 00:13:40.278 ], 00:13:40.278 "product_name": "Malloc disk", 00:13:40.278 "block_size": 512, 00:13:40.278 "num_blocks": 65536, 00:13:40.278 "uuid": 
"f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:40.278 "assigned_rate_limits": { 00:13:40.278 "rw_ios_per_sec": 0, 00:13:40.278 "rw_mbytes_per_sec": 0, 00:13:40.278 "r_mbytes_per_sec": 0, 00:13:40.278 "w_mbytes_per_sec": 0 00:13:40.278 }, 00:13:40.278 "claimed": true, 00:13:40.278 "claim_type": "exclusive_write", 00:13:40.278 "zoned": false, 00:13:40.278 "supported_io_types": { 00:13:40.278 "read": true, 00:13:40.278 "write": true, 00:13:40.278 "unmap": true, 00:13:40.278 "flush": true, 00:13:40.278 "reset": true, 00:13:40.278 "nvme_admin": false, 00:13:40.278 "nvme_io": false, 00:13:40.278 "nvme_io_md": false, 00:13:40.278 "write_zeroes": true, 00:13:40.278 "zcopy": true, 00:13:40.278 "get_zone_info": false, 00:13:40.278 "zone_management": false, 00:13:40.278 "zone_append": false, 00:13:40.278 "compare": false, 00:13:40.278 "compare_and_write": false, 00:13:40.278 "abort": true, 00:13:40.278 "seek_hole": false, 00:13:40.278 "seek_data": false, 00:13:40.278 "copy": true, 00:13:40.278 "nvme_iov_md": false 00:13:40.278 }, 00:13:40.278 "memory_domains": [ 00:13:40.278 { 00:13:40.278 "dma_device_id": "system", 00:13:40.278 "dma_device_type": 1 00:13:40.278 }, 00:13:40.278 { 00:13:40.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.278 "dma_device_type": 2 00:13:40.278 } 00:13:40.278 ], 00:13:40.278 "driver_specific": {} 00:13:40.278 }' 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.278 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:40.279 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.538 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.538 "name": "BaseBdev3", 00:13:40.538 "aliases": [ 00:13:40.538 "f6f72d30-4a2e-11ef-9c8e-7947904e2597" 00:13:40.538 ], 00:13:40.538 "product_name": "Malloc disk", 00:13:40.538 "block_size": 512, 00:13:40.538 "num_blocks": 65536, 00:13:40.538 "uuid": "f6f72d30-4a2e-11ef-9c8e-7947904e2597", 00:13:40.538 "assigned_rate_limits": { 00:13:40.538 "rw_ios_per_sec": 0, 00:13:40.538 "rw_mbytes_per_sec": 0, 
00:13:40.538 "r_mbytes_per_sec": 0, 00:13:40.538 "w_mbytes_per_sec": 0 00:13:40.538 }, 00:13:40.538 "claimed": true, 00:13:40.538 "claim_type": "exclusive_write", 00:13:40.538 "zoned": false, 00:13:40.538 "supported_io_types": { 00:13:40.538 "read": true, 00:13:40.538 "write": true, 00:13:40.538 "unmap": true, 00:13:40.538 "flush": true, 00:13:40.538 "reset": true, 00:13:40.538 "nvme_admin": false, 00:13:40.538 "nvme_io": false, 00:13:40.538 "nvme_io_md": false, 00:13:40.538 "write_zeroes": true, 00:13:40.538 "zcopy": true, 00:13:40.538 "get_zone_info": false, 00:13:40.539 "zone_management": false, 00:13:40.539 "zone_append": false, 00:13:40.539 "compare": false, 00:13:40.539 "compare_and_write": false, 00:13:40.539 "abort": true, 00:13:40.539 "seek_hole": false, 00:13:40.539 "seek_data": false, 00:13:40.539 "copy": true, 00:13:40.539 "nvme_iov_md": false 00:13:40.539 }, 00:13:40.539 "memory_domains": [ 00:13:40.539 { 00:13:40.539 "dma_device_id": "system", 00:13:40.539 "dma_device_type": 1 00:13:40.539 }, 00:13:40.539 { 00:13:40.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.539 "dma_device_type": 2 00:13:40.539 } 00:13:40.539 ], 00:13:40.539 "driver_specific": {} 00:13:40.539 }' 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:40.539 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.799 "name": "BaseBdev4", 00:13:40.799 "aliases": [ 00:13:40.799 "f795456c-4a2e-11ef-9c8e-7947904e2597" 00:13:40.799 ], 00:13:40.799 "product_name": "Malloc disk", 00:13:40.799 "block_size": 512, 00:13:40.799 "num_blocks": 65536, 00:13:40.799 "uuid": "f795456c-4a2e-11ef-9c8e-7947904e2597", 00:13:40.799 "assigned_rate_limits": { 00:13:40.799 "rw_ios_per_sec": 0, 00:13:40.799 "rw_mbytes_per_sec": 0, 00:13:40.799 "r_mbytes_per_sec": 0, 00:13:40.799 "w_mbytes_per_sec": 0 00:13:40.799 }, 00:13:40.799 "claimed": true, 00:13:40.799 "claim_type": 
"exclusive_write", 00:13:40.799 "zoned": false, 00:13:40.799 "supported_io_types": { 00:13:40.799 "read": true, 00:13:40.799 "write": true, 00:13:40.799 "unmap": true, 00:13:40.799 "flush": true, 00:13:40.799 "reset": true, 00:13:40.799 "nvme_admin": false, 00:13:40.799 "nvme_io": false, 00:13:40.799 "nvme_io_md": false, 00:13:40.799 "write_zeroes": true, 00:13:40.799 "zcopy": true, 00:13:40.799 "get_zone_info": false, 00:13:40.799 "zone_management": false, 00:13:40.799 "zone_append": false, 00:13:40.799 "compare": false, 00:13:40.799 "compare_and_write": false, 00:13:40.799 "abort": true, 00:13:40.799 "seek_hole": false, 00:13:40.799 "seek_data": false, 00:13:40.799 "copy": true, 00:13:40.799 "nvme_iov_md": false 00:13:40.799 }, 00:13:40.799 "memory_domains": [ 00:13:40.799 { 00:13:40.799 "dma_device_id": "system", 00:13:40.799 "dma_device_type": 1 00:13:40.799 }, 00:13:40.799 { 00:13:40.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.799 "dma_device_type": 2 00:13:40.799 } 00:13:40.799 ], 00:13:40.799 "driver_specific": {} 00:13:40.799 }' 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.799 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:41.059 [2024-07-25 02:38:27.880294] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.059 02:38:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.319 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.319 "name": "Existed_Raid", 00:13:41.319 "uuid": "f5f8a164-4a2e-11ef-9c8e-7947904e2597", 00:13:41.319 "strip_size_kb": 0, 00:13:41.319 "state": "online", 00:13:41.319 "raid_level": "raid1", 00:13:41.319 "superblock": true, 00:13:41.319 "num_base_bdevs": 4, 00:13:41.319 "num_base_bdevs_discovered": 3, 00:13:41.319 "num_base_bdevs_operational": 3, 00:13:41.319 "base_bdevs_list": [ 00:13:41.319 { 00:13:41.319 "name": null, 00:13:41.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.319 "is_configured": false, 00:13:41.319 "data_offset": 2048, 00:13:41.319 "data_size": 63488 00:13:41.319 }, 00:13:41.319 { 00:13:41.319 "name": "BaseBdev2", 00:13:41.319 "uuid": "f65ae852-4a2e-11ef-9c8e-7947904e2597", 00:13:41.319 "is_configured": true, 00:13:41.319 "data_offset": 2048, 00:13:41.319 "data_size": 63488 00:13:41.319 }, 00:13:41.319 { 00:13:41.319 "name": "BaseBdev3", 00:13:41.319 "uuid": "f6f72d30-4a2e-11ef-9c8e-7947904e2597", 00:13:41.319 "is_configured": true, 00:13:41.319 "data_offset": 2048, 00:13:41.319 "data_size": 63488 00:13:41.319 }, 00:13:41.319 { 00:13:41.319 "name": "BaseBdev4", 00:13:41.319 "uuid": "f795456c-4a2e-11ef-9c8e-7947904e2597", 00:13:41.319 "is_configured": true, 00:13:41.319 "data_offset": 2048, 00:13:41.319 "data_size": 63488 00:13:41.319 } 00:13:41.319 ] 00:13:41.319 }' 00:13:41.319 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.319 02:38:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:41.579 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:41.579 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.579 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:41.839 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:41.839 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:41.839 02:38:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:41.839 [2024-07-25 02:38:28.717027] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.099 02:38:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:42.359 [2024-07-25 02:38:29.105744] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.359 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:42.359 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:42.359 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.359 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:42.618 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:42.618 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.618 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:42.618 [2024-07-25 02:38:29.474553] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:42.618 [2024-07-25 02:38:29.474584] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.619 [2024-07-25 02:38:29.479357] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.619 [2024-07-25 02:38:29.479368] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.619 [2024-07-25 02:38:29.479371] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3cc643434a00 name Existed_Raid, state offline 00:13:42.619 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:42.619 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:42.619 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.619 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:42.878 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
raid_bdev= 00:13:42.878 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:42.878 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:42.878 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:42.878 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:42.878 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.137 BaseBdev2 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:43.137 02:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.396 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.396 [ 00:13:43.396 { 00:13:43.396 "name": "BaseBdev2", 00:13:43.396 "aliases": [ 00:13:43.396 "fa3528d9-4a2e-11ef-9c8e-7947904e2597" 00:13:43.396 ], 00:13:43.396 "product_name": "Malloc disk", 00:13:43.396 "block_size": 512, 00:13:43.396 "num_blocks": 65536, 00:13:43.396 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:43.396 "assigned_rate_limits": { 00:13:43.396 "rw_ios_per_sec": 0, 00:13:43.396 "rw_mbytes_per_sec": 0, 00:13:43.396 "r_mbytes_per_sec": 0, 00:13:43.396 "w_mbytes_per_sec": 0 00:13:43.396 }, 00:13:43.396 "claimed": false, 00:13:43.396 "zoned": false, 00:13:43.396 "supported_io_types": { 00:13:43.396 "read": true, 00:13:43.396 "write": true, 00:13:43.396 "unmap": true, 00:13:43.396 "flush": true, 00:13:43.396 "reset": true, 00:13:43.396 "nvme_admin": false, 00:13:43.396 "nvme_io": false, 00:13:43.396 "nvme_io_md": false, 00:13:43.396 "write_zeroes": true, 00:13:43.396 "zcopy": true, 00:13:43.396 "get_zone_info": false, 00:13:43.396 "zone_management": false, 00:13:43.396 "zone_append": false, 00:13:43.396 "compare": false, 00:13:43.396 "compare_and_write": false, 00:13:43.396 "abort": true, 00:13:43.396 "seek_hole": false, 00:13:43.396 "seek_data": false, 00:13:43.396 "copy": true, 00:13:43.396 "nvme_iov_md": false 00:13:43.396 }, 00:13:43.396 "memory_domains": [ 00:13:43.396 { 00:13:43.396 "dma_device_id": "system", 00:13:43.396 "dma_device_type": 1 00:13:43.396 }, 00:13:43.396 { 00:13:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.396 "dma_device_type": 2 00:13:43.396 } 00:13:43.396 ], 00:13:43.396 "driver_specific": {} 00:13:43.396 } 00:13:43.396 ] 00:13:43.396 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:43.396 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- 
# (( i++ )) 00:13:43.396 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:43.396 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.656 BaseBdev3 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:43.656 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.916 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:43.916 [ 00:13:43.916 { 00:13:43.916 "name": "BaseBdev3", 00:13:43.916 "aliases": [ 00:13:43.916 "fa882ca7-4a2e-11ef-9c8e-7947904e2597" 00:13:43.916 ], 00:13:43.916 "product_name": "Malloc disk", 00:13:43.916 "block_size": 512, 00:13:43.916 "num_blocks": 65536, 00:13:43.916 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:43.916 "assigned_rate_limits": { 00:13:43.916 "rw_ios_per_sec": 0, 00:13:43.916 "rw_mbytes_per_sec": 0, 00:13:43.916 "r_mbytes_per_sec": 0, 00:13:43.916 "w_mbytes_per_sec": 0 00:13:43.916 }, 00:13:43.916 "claimed": false, 00:13:43.916 "zoned": false, 00:13:43.916 "supported_io_types": { 00:13:43.916 "read": true, 00:13:43.916 "write": true, 00:13:43.916 "unmap": true, 00:13:43.916 "flush": true, 00:13:43.916 "reset": true, 00:13:43.916 "nvme_admin": false, 00:13:43.916 "nvme_io": false, 00:13:43.916 "nvme_io_md": false, 00:13:43.916 "write_zeroes": true, 00:13:43.916 "zcopy": true, 00:13:43.916 "get_zone_info": false, 00:13:43.916 "zone_management": false, 00:13:43.916 "zone_append": false, 00:13:43.916 "compare": false, 00:13:43.916 "compare_and_write": false, 00:13:43.916 "abort": true, 00:13:43.916 "seek_hole": false, 00:13:43.916 "seek_data": false, 00:13:43.916 "copy": true, 00:13:43.916 "nvme_iov_md": false 00:13:43.916 }, 00:13:43.916 "memory_domains": [ 00:13:43.916 { 00:13:43.916 "dma_device_id": "system", 00:13:43.916 "dma_device_type": 1 00:13:43.916 }, 00:13:43.916 { 00:13:43.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.916 "dma_device_type": 2 00:13:43.916 } 00:13:43.916 ], 00:13:43.916 "driver_specific": {} 00:13:43.916 } 00:13:43.916 ] 00:13:43.916 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:43.916 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:43.916 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:43.916 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev4 00:13:44.176 BaseBdev4 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:44.176 02:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.435 02:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:44.435 [ 00:13:44.435 { 00:13:44.435 "name": "BaseBdev4", 00:13:44.435 "aliases": [ 00:13:44.435 "fadd051c-4a2e-11ef-9c8e-7947904e2597" 00:13:44.435 ], 00:13:44.435 "product_name": "Malloc disk", 00:13:44.435 "block_size": 512, 00:13:44.435 "num_blocks": 65536, 00:13:44.435 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:44.435 "assigned_rate_limits": { 00:13:44.435 "rw_ios_per_sec": 0, 00:13:44.435 "rw_mbytes_per_sec": 0, 00:13:44.435 "r_mbytes_per_sec": 0, 00:13:44.435 "w_mbytes_per_sec": 0 00:13:44.435 }, 00:13:44.435 "claimed": false, 00:13:44.435 "zoned": false, 00:13:44.435 "supported_io_types": { 00:13:44.435 "read": true, 00:13:44.435 "write": true, 00:13:44.435 "unmap": true, 00:13:44.435 "flush": true, 00:13:44.435 "reset": true, 00:13:44.435 "nvme_admin": false, 00:13:44.435 "nvme_io": false, 00:13:44.435 "nvme_io_md": false, 00:13:44.435 "write_zeroes": true, 00:13:44.435 "zcopy": true, 00:13:44.435 "get_zone_info": false, 00:13:44.435 "zone_management": false, 00:13:44.435 "zone_append": false, 00:13:44.435 "compare": false, 00:13:44.435 "compare_and_write": false, 00:13:44.435 "abort": true, 00:13:44.435 "seek_hole": false, 00:13:44.435 "seek_data": false, 00:13:44.435 "copy": true, 00:13:44.435 "nvme_iov_md": false 00:13:44.435 }, 00:13:44.435 "memory_domains": [ 00:13:44.435 { 00:13:44.435 "dma_device_id": "system", 00:13:44.435 "dma_device_type": 1 00:13:44.435 }, 00:13:44.435 { 00:13:44.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.435 "dma_device_type": 2 00:13:44.435 } 00:13:44.435 ], 00:13:44.435 "driver_specific": {} 00:13:44.435 } 00:13:44.435 ] 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:44.694 [2024-07-25 02:38:31.491506] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.694 [2024-07-25 02:38:31.491540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:13:44.694 [2024-07-25 02:38:31.491546] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.694 [2024-07-25 02:38:31.491937] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.694 [2024-07-25 02:38:31.491953] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:44.694 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.695 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.954 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:44.954 "name": "Existed_Raid", 00:13:44.954 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:44.954 "strip_size_kb": 0, 00:13:44.954 "state": "configuring", 00:13:44.954 "raid_level": "raid1", 00:13:44.954 "superblock": true, 00:13:44.954 "num_base_bdevs": 4, 00:13:44.954 "num_base_bdevs_discovered": 3, 00:13:44.954 "num_base_bdevs_operational": 4, 00:13:44.954 "base_bdevs_list": [ 00:13:44.954 { 00:13:44.954 "name": "BaseBdev1", 00:13:44.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.954 "is_configured": false, 00:13:44.954 "data_offset": 0, 00:13:44.954 "data_size": 0 00:13:44.954 }, 00:13:44.954 { 00:13:44.954 "name": "BaseBdev2", 00:13:44.954 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 2048, 00:13:44.954 "data_size": 63488 00:13:44.954 }, 00:13:44.954 { 00:13:44.954 "name": "BaseBdev3", 00:13:44.954 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 2048, 00:13:44.954 "data_size": 63488 00:13:44.954 }, 00:13:44.954 { 00:13:44.954 "name": "BaseBdev4", 00:13:44.954 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:44.954 "is_configured": true, 00:13:44.954 "data_offset": 2048, 00:13:44.954 "data_size": 63488 00:13:44.954 } 00:13:44.954 ] 00:13:44.954 }' 00:13:44.954 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:44.954 02:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:45.214 02:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:45.474 [2024-07-25 02:38:32.159562] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.474 "name": "Existed_Raid", 00:13:45.474 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:45.474 "strip_size_kb": 0, 00:13:45.474 "state": "configuring", 00:13:45.474 "raid_level": "raid1", 00:13:45.474 "superblock": true, 00:13:45.474 "num_base_bdevs": 4, 00:13:45.474 "num_base_bdevs_discovered": 2, 00:13:45.474 "num_base_bdevs_operational": 4, 00:13:45.474 "base_bdevs_list": [ 00:13:45.474 { 00:13:45.474 "name": "BaseBdev1", 00:13:45.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.474 "is_configured": false, 00:13:45.474 "data_offset": 0, 00:13:45.474 "data_size": 0 00:13:45.474 }, 00:13:45.474 { 00:13:45.474 "name": null, 00:13:45.474 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:45.474 "is_configured": false, 00:13:45.474 "data_offset": 2048, 00:13:45.474 "data_size": 63488 00:13:45.474 }, 00:13:45.474 { 00:13:45.474 "name": "BaseBdev3", 00:13:45.474 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:45.474 "is_configured": true, 00:13:45.474 "data_offset": 2048, 00:13:45.474 "data_size": 63488 00:13:45.474 }, 00:13:45.474 { 00:13:45.474 "name": "BaseBdev4", 00:13:45.474 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:45.474 "is_configured": true, 00:13:45.474 "data_offset": 2048, 00:13:45.474 "data_size": 63488 00:13:45.474 } 00:13:45.474 ] 00:13:45.474 }' 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.474 02:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.044 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.044 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:46.044 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:46.044 02:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.303 [2024-07-25 02:38:33.011732] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.303 BaseBdev1 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:46.303 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.564 [ 00:13:46.564 { 00:13:46.564 "name": "BaseBdev1", 00:13:46.564 "aliases": [ 00:13:46.564 "fc17615b-4a2e-11ef-9c8e-7947904e2597" 00:13:46.564 ], 00:13:46.564 "product_name": "Malloc disk", 00:13:46.564 "block_size": 512, 00:13:46.564 "num_blocks": 65536, 00:13:46.564 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:46.564 "assigned_rate_limits": { 00:13:46.564 "rw_ios_per_sec": 0, 00:13:46.564 "rw_mbytes_per_sec": 0, 00:13:46.564 "r_mbytes_per_sec": 0, 00:13:46.564 "w_mbytes_per_sec": 0 00:13:46.564 }, 00:13:46.564 "claimed": true, 00:13:46.564 "claim_type": "exclusive_write", 00:13:46.564 "zoned": false, 00:13:46.564 "supported_io_types": { 00:13:46.564 "read": true, 00:13:46.564 "write": true, 00:13:46.564 "unmap": true, 00:13:46.564 "flush": true, 00:13:46.564 "reset": true, 00:13:46.564 "nvme_admin": false, 00:13:46.564 "nvme_io": false, 00:13:46.564 "nvme_io_md": false, 00:13:46.564 "write_zeroes": true, 00:13:46.564 "zcopy": true, 00:13:46.564 "get_zone_info": false, 00:13:46.564 "zone_management": false, 00:13:46.564 "zone_append": false, 00:13:46.564 "compare": false, 00:13:46.564 "compare_and_write": false, 00:13:46.564 "abort": true, 00:13:46.564 "seek_hole": false, 00:13:46.564 "seek_data": false, 00:13:46.564 "copy": true, 00:13:46.564 "nvme_iov_md": false 00:13:46.564 }, 00:13:46.564 "memory_domains": [ 00:13:46.564 { 00:13:46.564 "dma_device_id": "system", 00:13:46.564 "dma_device_type": 1 00:13:46.564 }, 00:13:46.564 { 00:13:46.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.564 "dma_device_type": 2 00:13:46.564 } 00:13:46.564 ], 00:13:46.564 "driver_specific": {} 00:13:46.564 } 00:13:46.564 ] 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:46.564 
02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.564 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.823 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.823 "name": "Existed_Raid", 00:13:46.823 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:46.823 "strip_size_kb": 0, 00:13:46.823 "state": "configuring", 00:13:46.823 "raid_level": "raid1", 00:13:46.823 "superblock": true, 00:13:46.823 "num_base_bdevs": 4, 00:13:46.823 "num_base_bdevs_discovered": 3, 00:13:46.823 "num_base_bdevs_operational": 4, 00:13:46.823 "base_bdevs_list": [ 00:13:46.823 { 00:13:46.823 "name": "BaseBdev1", 00:13:46.823 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:46.823 "is_configured": true, 00:13:46.823 "data_offset": 2048, 00:13:46.823 "data_size": 63488 00:13:46.823 }, 00:13:46.823 { 00:13:46.823 "name": null, 00:13:46.823 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:46.823 "is_configured": false, 00:13:46.823 "data_offset": 2048, 00:13:46.823 "data_size": 63488 00:13:46.823 }, 00:13:46.823 { 00:13:46.823 "name": "BaseBdev3", 00:13:46.823 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:46.823 "is_configured": true, 00:13:46.823 "data_offset": 2048, 00:13:46.823 "data_size": 63488 00:13:46.823 }, 00:13:46.823 { 00:13:46.823 "name": "BaseBdev4", 00:13:46.823 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:46.823 "is_configured": true, 00:13:46.823 "data_offset": 2048, 00:13:46.823 "data_size": 63488 00:13:46.823 } 00:13:46.823 ] 00:13:46.823 }' 00:13:46.823 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.823 02:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.083 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.083 02:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ 
true == \t\r\u\e ]] 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:47.342 [2024-07-25 02:38:34.199741] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.342 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.602 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.602 "name": "Existed_Raid", 00:13:47.602 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:47.602 "strip_size_kb": 0, 00:13:47.602 "state": "configuring", 00:13:47.602 "raid_level": "raid1", 00:13:47.602 "superblock": true, 00:13:47.602 "num_base_bdevs": 4, 00:13:47.602 "num_base_bdevs_discovered": 2, 00:13:47.602 "num_base_bdevs_operational": 4, 00:13:47.602 "base_bdevs_list": [ 00:13:47.602 { 00:13:47.602 "name": "BaseBdev1", 00:13:47.602 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:47.602 "is_configured": true, 00:13:47.602 "data_offset": 2048, 00:13:47.602 "data_size": 63488 00:13:47.602 }, 00:13:47.602 { 00:13:47.602 "name": null, 00:13:47.602 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:47.602 "is_configured": false, 00:13:47.602 "data_offset": 2048, 00:13:47.602 "data_size": 63488 00:13:47.602 }, 00:13:47.602 { 00:13:47.602 "name": null, 00:13:47.602 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:47.602 "is_configured": false, 00:13:47.602 "data_offset": 2048, 00:13:47.602 "data_size": 63488 00:13:47.602 }, 00:13:47.602 { 00:13:47.602 "name": "BaseBdev4", 00:13:47.602 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:47.602 "is_configured": true, 00:13:47.602 "data_offset": 2048, 00:13:47.602 "data_size": 63488 00:13:47.602 } 00:13:47.602 ] 00:13:47.602 }' 00:13:47.602 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.602 02:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.862 02:38:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.862 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:48.121 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:48.121 02:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:48.382 [2024-07-25 02:38:35.047821] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.382 "name": "Existed_Raid", 00:13:48.382 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:48.382 "strip_size_kb": 0, 00:13:48.382 "state": "configuring", 00:13:48.382 "raid_level": "raid1", 00:13:48.382 "superblock": true, 00:13:48.382 "num_base_bdevs": 4, 00:13:48.382 "num_base_bdevs_discovered": 3, 00:13:48.382 "num_base_bdevs_operational": 4, 00:13:48.382 "base_bdevs_list": [ 00:13:48.382 { 00:13:48.382 "name": "BaseBdev1", 00:13:48.382 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:48.382 "is_configured": true, 00:13:48.382 "data_offset": 2048, 00:13:48.382 "data_size": 63488 00:13:48.382 }, 00:13:48.382 { 00:13:48.382 "name": null, 00:13:48.382 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:48.382 "is_configured": false, 00:13:48.382 "data_offset": 2048, 00:13:48.382 "data_size": 63488 00:13:48.382 }, 00:13:48.382 { 00:13:48.382 "name": "BaseBdev3", 00:13:48.382 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:48.382 "is_configured": true, 00:13:48.382 "data_offset": 2048, 00:13:48.382 "data_size": 63488 00:13:48.382 }, 00:13:48.382 { 00:13:48.382 "name": "BaseBdev4", 00:13:48.382 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:48.382 "is_configured": true, 00:13:48.382 
"data_offset": 2048, 00:13:48.382 "data_size": 63488 00:13:48.382 } 00:13:48.382 ] 00:13:48.382 }' 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.382 02:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.642 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.642 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:48.901 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:48.901 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:49.162 [2024-07-25 02:38:35.855900] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.162 02:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.422 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.422 "name": "Existed_Raid", 00:13:49.422 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:49.422 "strip_size_kb": 0, 00:13:49.422 "state": "configuring", 00:13:49.422 "raid_level": "raid1", 00:13:49.422 "superblock": true, 00:13:49.422 "num_base_bdevs": 4, 00:13:49.422 "num_base_bdevs_discovered": 2, 00:13:49.422 "num_base_bdevs_operational": 4, 00:13:49.422 "base_bdevs_list": [ 00:13:49.422 { 00:13:49.422 "name": null, 00:13:49.422 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:49.422 "is_configured": false, 00:13:49.422 "data_offset": 2048, 00:13:49.422 "data_size": 63488 00:13:49.422 }, 00:13:49.422 { 00:13:49.422 "name": null, 00:13:49.422 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:49.422 "is_configured": false, 00:13:49.422 "data_offset": 2048, 00:13:49.422 "data_size": 63488 00:13:49.422 }, 00:13:49.422 { 00:13:49.422 "name": "BaseBdev3", 00:13:49.422 
"uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:49.422 "is_configured": true, 00:13:49.422 "data_offset": 2048, 00:13:49.422 "data_size": 63488 00:13:49.422 }, 00:13:49.422 { 00:13:49.422 "name": "BaseBdev4", 00:13:49.422 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:49.422 "is_configured": true, 00:13:49.422 "data_offset": 2048, 00:13:49.422 "data_size": 63488 00:13:49.422 } 00:13:49.422 ] 00:13:49.422 }' 00:13:49.422 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.422 02:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.681 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.681 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:49.681 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:49.681 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:49.939 [2024-07-25 02:38:36.704749] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.939 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.199 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.199 "name": "Existed_Raid", 00:13:50.199 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:50.199 "strip_size_kb": 0, 00:13:50.199 "state": "configuring", 00:13:50.199 "raid_level": "raid1", 00:13:50.199 "superblock": true, 00:13:50.199 "num_base_bdevs": 4, 00:13:50.199 "num_base_bdevs_discovered": 3, 00:13:50.199 "num_base_bdevs_operational": 4, 00:13:50.199 "base_bdevs_list": [ 00:13:50.199 { 00:13:50.199 "name": null, 00:13:50.199 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:50.199 "is_configured": false, 
00:13:50.199 "data_offset": 2048, 00:13:50.199 "data_size": 63488 00:13:50.199 }, 00:13:50.199 { 00:13:50.199 "name": "BaseBdev2", 00:13:50.199 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:50.199 "is_configured": true, 00:13:50.199 "data_offset": 2048, 00:13:50.199 "data_size": 63488 00:13:50.199 }, 00:13:50.199 { 00:13:50.199 "name": "BaseBdev3", 00:13:50.199 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:50.199 "is_configured": true, 00:13:50.199 "data_offset": 2048, 00:13:50.199 "data_size": 63488 00:13:50.199 }, 00:13:50.199 { 00:13:50.199 "name": "BaseBdev4", 00:13:50.199 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:50.199 "is_configured": true, 00:13:50.199 "data_offset": 2048, 00:13:50.199 "data_size": 63488 00:13:50.199 } 00:13:50.199 ] 00:13:50.199 }' 00:13:50.199 02:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.199 02:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.459 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.459 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:50.718 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:50.719 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.719 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:50.719 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fc17615b-4a2e-11ef-9c8e-7947904e2597 00:13:50.978 [2024-07-25 02:38:37.744931] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:50.978 [2024-07-25 02:38:37.744970] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x3cc643434f00 00:13:50.978 [2024-07-25 02:38:37.744973] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.978 [2024-07-25 02:38:37.744989] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3cc643497e20 00:13:50.978 [2024-07-25 02:38:37.745020] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3cc643434f00 00:13:50.978 [2024-07-25 02:38:37.745022] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3cc643434f00 00:13:50.978 [2024-07-25 02:38:37.745036] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.978 NewBaseBdev 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:50.978 02:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.238 02:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:51.238 [ 00:13:51.238 { 00:13:51.238 "name": "NewBaseBdev", 00:13:51.238 "aliases": [ 00:13:51.238 "fc17615b-4a2e-11ef-9c8e-7947904e2597" 00:13:51.238 ], 00:13:51.238 "product_name": "Malloc disk", 00:13:51.238 "block_size": 512, 00:13:51.238 "num_blocks": 65536, 00:13:51.238 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:51.238 "assigned_rate_limits": { 00:13:51.238 "rw_ios_per_sec": 0, 00:13:51.238 "rw_mbytes_per_sec": 0, 00:13:51.238 "r_mbytes_per_sec": 0, 00:13:51.238 "w_mbytes_per_sec": 0 00:13:51.238 }, 00:13:51.238 "claimed": true, 00:13:51.238 "claim_type": "exclusive_write", 00:13:51.238 "zoned": false, 00:13:51.238 "supported_io_types": { 00:13:51.238 "read": true, 00:13:51.238 "write": true, 00:13:51.238 "unmap": true, 00:13:51.238 "flush": true, 00:13:51.238 "reset": true, 00:13:51.238 "nvme_admin": false, 00:13:51.238 "nvme_io": false, 00:13:51.238 "nvme_io_md": false, 00:13:51.238 "write_zeroes": true, 00:13:51.238 "zcopy": true, 00:13:51.238 "get_zone_info": false, 00:13:51.238 "zone_management": false, 00:13:51.238 "zone_append": false, 00:13:51.238 "compare": false, 00:13:51.238 "compare_and_write": false, 00:13:51.238 "abort": true, 00:13:51.238 "seek_hole": false, 00:13:51.238 "seek_data": false, 00:13:51.238 "copy": true, 00:13:51.238 "nvme_iov_md": false 00:13:51.238 }, 00:13:51.238 "memory_domains": [ 00:13:51.238 { 00:13:51.238 "dma_device_id": "system", 00:13:51.238 "dma_device_type": 1 00:13:51.238 }, 00:13:51.238 { 00:13:51.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.238 "dma_device_type": 2 00:13:51.238 } 00:13:51.238 ], 00:13:51.238 "driver_specific": {} 00:13:51.238 } 00:13:51.238 ] 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.238 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.498 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.498 "name": "Existed_Raid", 00:13:51.498 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:51.498 "strip_size_kb": 0, 00:13:51.498 "state": "online", 00:13:51.498 "raid_level": "raid1", 00:13:51.498 "superblock": true, 00:13:51.498 "num_base_bdevs": 4, 00:13:51.498 "num_base_bdevs_discovered": 4, 00:13:51.498 "num_base_bdevs_operational": 4, 00:13:51.498 "base_bdevs_list": [ 00:13:51.498 { 00:13:51.498 "name": "NewBaseBdev", 00:13:51.498 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev2", 00:13:51.498 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev3", 00:13:51.498 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 }, 00:13:51.498 { 00:13:51.498 "name": "BaseBdev4", 00:13:51.498 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:51.498 "is_configured": true, 00:13:51.498 "data_offset": 2048, 00:13:51.498 "data_size": 63488 00:13:51.498 } 00:13:51.498 ] 00:13:51.498 }' 00:13:51.498 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.498 02:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:51.758 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:52.018 [2024-07-25 02:38:38.788960] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.018 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:52.018 "name": "Existed_Raid", 00:13:52.018 "aliases": [ 00:13:52.018 "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597" 00:13:52.018 ], 00:13:52.018 "product_name": "Raid Volume", 00:13:52.018 "block_size": 512, 00:13:52.018 "num_blocks": 63488, 00:13:52.018 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:52.018 "assigned_rate_limits": { 00:13:52.018 "rw_ios_per_sec": 0, 00:13:52.018 "rw_mbytes_per_sec": 0, 00:13:52.018 "r_mbytes_per_sec": 0, 00:13:52.018 "w_mbytes_per_sec": 0 00:13:52.018 }, 
00:13:52.018 "claimed": false, 00:13:52.018 "zoned": false, 00:13:52.018 "supported_io_types": { 00:13:52.018 "read": true, 00:13:52.018 "write": true, 00:13:52.018 "unmap": false, 00:13:52.018 "flush": false, 00:13:52.018 "reset": true, 00:13:52.018 "nvme_admin": false, 00:13:52.018 "nvme_io": false, 00:13:52.018 "nvme_io_md": false, 00:13:52.018 "write_zeroes": true, 00:13:52.018 "zcopy": false, 00:13:52.018 "get_zone_info": false, 00:13:52.018 "zone_management": false, 00:13:52.018 "zone_append": false, 00:13:52.018 "compare": false, 00:13:52.018 "compare_and_write": false, 00:13:52.018 "abort": false, 00:13:52.018 "seek_hole": false, 00:13:52.018 "seek_data": false, 00:13:52.018 "copy": false, 00:13:52.018 "nvme_iov_md": false 00:13:52.018 }, 00:13:52.018 "memory_domains": [ 00:13:52.018 { 00:13:52.018 "dma_device_id": "system", 00:13:52.018 "dma_device_type": 1 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.018 "dma_device_type": 2 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "system", 00:13:52.018 "dma_device_type": 1 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.018 "dma_device_type": 2 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "system", 00:13:52.018 "dma_device_type": 1 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.018 "dma_device_type": 2 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "system", 00:13:52.018 "dma_device_type": 1 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.018 "dma_device_type": 2 00:13:52.018 } 00:13:52.018 ], 00:13:52.018 "driver_specific": { 00:13:52.018 "raid": { 00:13:52.018 "uuid": "fb2f6cd0-4a2e-11ef-9c8e-7947904e2597", 00:13:52.018 "strip_size_kb": 0, 00:13:52.018 "state": "online", 00:13:52.018 "raid_level": "raid1", 00:13:52.018 "superblock": true, 00:13:52.018 "num_base_bdevs": 4, 00:13:52.018 "num_base_bdevs_discovered": 4, 00:13:52.018 "num_base_bdevs_operational": 4, 00:13:52.018 "base_bdevs_list": [ 00:13:52.018 { 00:13:52.018 "name": "NewBaseBdev", 00:13:52.018 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:52.018 "is_configured": true, 00:13:52.018 "data_offset": 2048, 00:13:52.018 "data_size": 63488 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "name": "BaseBdev2", 00:13:52.018 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:52.018 "is_configured": true, 00:13:52.018 "data_offset": 2048, 00:13:52.018 "data_size": 63488 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "name": "BaseBdev3", 00:13:52.018 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:52.018 "is_configured": true, 00:13:52.018 "data_offset": 2048, 00:13:52.018 "data_size": 63488 00:13:52.018 }, 00:13:52.018 { 00:13:52.018 "name": "BaseBdev4", 00:13:52.018 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:52.018 "is_configured": true, 00:13:52.018 "data_offset": 2048, 00:13:52.018 "data_size": 63488 00:13:52.018 } 00:13:52.018 ] 00:13:52.018 } 00:13:52.018 } 00:13:52.018 }' 00:13:52.018 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:52.018 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:52.018 BaseBdev2 00:13:52.018 BaseBdev3 00:13:52.018 BaseBdev4' 00:13:52.018 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:13:52.018 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:52.018 02:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.278 "name": "NewBaseBdev", 00:13:52.278 "aliases": [ 00:13:52.278 "fc17615b-4a2e-11ef-9c8e-7947904e2597" 00:13:52.278 ], 00:13:52.278 "product_name": "Malloc disk", 00:13:52.278 "block_size": 512, 00:13:52.278 "num_blocks": 65536, 00:13:52.278 "uuid": "fc17615b-4a2e-11ef-9c8e-7947904e2597", 00:13:52.278 "assigned_rate_limits": { 00:13:52.278 "rw_ios_per_sec": 0, 00:13:52.278 "rw_mbytes_per_sec": 0, 00:13:52.278 "r_mbytes_per_sec": 0, 00:13:52.278 "w_mbytes_per_sec": 0 00:13:52.278 }, 00:13:52.278 "claimed": true, 00:13:52.278 "claim_type": "exclusive_write", 00:13:52.278 "zoned": false, 00:13:52.278 "supported_io_types": { 00:13:52.278 "read": true, 00:13:52.278 "write": true, 00:13:52.278 "unmap": true, 00:13:52.278 "flush": true, 00:13:52.278 "reset": true, 00:13:52.278 "nvme_admin": false, 00:13:52.278 "nvme_io": false, 00:13:52.278 "nvme_io_md": false, 00:13:52.278 "write_zeroes": true, 00:13:52.278 "zcopy": true, 00:13:52.278 "get_zone_info": false, 00:13:52.278 "zone_management": false, 00:13:52.278 "zone_append": false, 00:13:52.278 "compare": false, 00:13:52.278 "compare_and_write": false, 00:13:52.278 "abort": true, 00:13:52.278 "seek_hole": false, 00:13:52.278 "seek_data": false, 00:13:52.278 "copy": true, 00:13:52.278 "nvme_iov_md": false 00:13:52.278 }, 00:13:52.278 "memory_domains": [ 00:13:52.278 { 00:13:52.278 "dma_device_id": "system", 00:13:52.278 "dma_device_type": 1 00:13:52.278 }, 00:13:52.278 { 00:13:52.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.278 "dma_device_type": 2 00:13:52.278 } 00:13:52.278 ], 00:13:52.278 "driver_specific": {} 00:13:52.278 }' 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:52.278 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.538 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.538 "name": "BaseBdev2", 00:13:52.538 "aliases": [ 00:13:52.538 "fa3528d9-4a2e-11ef-9c8e-7947904e2597" 00:13:52.538 ], 00:13:52.538 "product_name": "Malloc disk", 00:13:52.538 "block_size": 512, 00:13:52.538 "num_blocks": 65536, 00:13:52.538 "uuid": "fa3528d9-4a2e-11ef-9c8e-7947904e2597", 00:13:52.538 "assigned_rate_limits": { 00:13:52.538 "rw_ios_per_sec": 0, 00:13:52.538 "rw_mbytes_per_sec": 0, 00:13:52.538 "r_mbytes_per_sec": 0, 00:13:52.538 "w_mbytes_per_sec": 0 00:13:52.538 }, 00:13:52.538 "claimed": true, 00:13:52.538 "claim_type": "exclusive_write", 00:13:52.538 "zoned": false, 00:13:52.538 "supported_io_types": { 00:13:52.538 "read": true, 00:13:52.538 "write": true, 00:13:52.538 "unmap": true, 00:13:52.538 "flush": true, 00:13:52.538 "reset": true, 00:13:52.538 "nvme_admin": false, 00:13:52.538 "nvme_io": false, 00:13:52.538 "nvme_io_md": false, 00:13:52.538 "write_zeroes": true, 00:13:52.538 "zcopy": true, 00:13:52.538 "get_zone_info": false, 00:13:52.538 "zone_management": false, 00:13:52.538 "zone_append": false, 00:13:52.538 "compare": false, 00:13:52.538 "compare_and_write": false, 00:13:52.538 "abort": true, 00:13:52.538 "seek_hole": false, 00:13:52.538 "seek_data": false, 00:13:52.538 "copy": true, 00:13:52.538 "nvme_iov_md": false 00:13:52.538 }, 00:13:52.538 "memory_domains": [ 00:13:52.538 { 00:13:52.538 "dma_device_id": "system", 00:13:52.538 "dma_device_type": 1 00:13:52.538 }, 00:13:52.538 { 00:13:52.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.538 "dma_device_type": 2 00:13:52.538 } 00:13:52.538 ], 00:13:52.538 "driver_specific": {} 00:13:52.538 }' 00:13:52.538 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.538 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.538 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.538 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:52.539 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:52.539 02:38:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.798 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.798 "name": "BaseBdev3", 00:13:52.798 "aliases": [ 00:13:52.798 "fa882ca7-4a2e-11ef-9c8e-7947904e2597" 00:13:52.798 ], 00:13:52.798 "product_name": "Malloc disk", 00:13:52.798 "block_size": 512, 00:13:52.798 "num_blocks": 65536, 00:13:52.798 "uuid": "fa882ca7-4a2e-11ef-9c8e-7947904e2597", 00:13:52.798 "assigned_rate_limits": { 00:13:52.798 "rw_ios_per_sec": 0, 00:13:52.798 "rw_mbytes_per_sec": 0, 00:13:52.798 "r_mbytes_per_sec": 0, 00:13:52.798 "w_mbytes_per_sec": 0 00:13:52.798 }, 00:13:52.798 "claimed": true, 00:13:52.798 "claim_type": "exclusive_write", 00:13:52.798 "zoned": false, 00:13:52.798 "supported_io_types": { 00:13:52.798 "read": true, 00:13:52.799 "write": true, 00:13:52.799 "unmap": true, 00:13:52.799 "flush": true, 00:13:52.799 "reset": true, 00:13:52.799 "nvme_admin": false, 00:13:52.799 "nvme_io": false, 00:13:52.799 "nvme_io_md": false, 00:13:52.799 "write_zeroes": true, 00:13:52.799 "zcopy": true, 00:13:52.799 "get_zone_info": false, 00:13:52.799 "zone_management": false, 00:13:52.799 "zone_append": false, 00:13:52.799 "compare": false, 00:13:52.799 "compare_and_write": false, 00:13:52.799 "abort": true, 00:13:52.799 "seek_hole": false, 00:13:52.799 "seek_data": false, 00:13:52.799 "copy": true, 00:13:52.799 "nvme_iov_md": false 00:13:52.799 }, 00:13:52.799 "memory_domains": [ 00:13:52.799 { 00:13:52.799 "dma_device_id": "system", 00:13:52.799 "dma_device_type": 1 00:13:52.799 }, 00:13:52.799 { 00:13:52.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.799 "dma_device_type": 2 00:13:52.799 } 00:13:52.799 ], 00:13:52.799 "driver_specific": {} 00:13:52.799 }' 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:52.799 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:53.058 02:38:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:53.058 "name": "BaseBdev4", 00:13:53.058 "aliases": [ 00:13:53.058 "fadd051c-4a2e-11ef-9c8e-7947904e2597" 00:13:53.058 ], 00:13:53.058 "product_name": "Malloc disk", 00:13:53.058 "block_size": 512, 00:13:53.058 "num_blocks": 65536, 00:13:53.058 "uuid": "fadd051c-4a2e-11ef-9c8e-7947904e2597", 00:13:53.058 "assigned_rate_limits": { 00:13:53.058 "rw_ios_per_sec": 0, 00:13:53.059 "rw_mbytes_per_sec": 0, 00:13:53.059 "r_mbytes_per_sec": 0, 00:13:53.059 "w_mbytes_per_sec": 0 00:13:53.059 }, 00:13:53.059 "claimed": true, 00:13:53.059 "claim_type": "exclusive_write", 00:13:53.059 "zoned": false, 00:13:53.059 "supported_io_types": { 00:13:53.059 "read": true, 00:13:53.059 "write": true, 00:13:53.059 "unmap": true, 00:13:53.059 "flush": true, 00:13:53.059 "reset": true, 00:13:53.059 "nvme_admin": false, 00:13:53.059 "nvme_io": false, 00:13:53.059 "nvme_io_md": false, 00:13:53.059 "write_zeroes": true, 00:13:53.059 "zcopy": true, 00:13:53.059 "get_zone_info": false, 00:13:53.059 "zone_management": false, 00:13:53.059 "zone_append": false, 00:13:53.059 "compare": false, 00:13:53.059 "compare_and_write": false, 00:13:53.059 "abort": true, 00:13:53.059 "seek_hole": false, 00:13:53.059 "seek_data": false, 00:13:53.059 "copy": true, 00:13:53.059 "nvme_iov_md": false 00:13:53.059 }, 00:13:53.059 "memory_domains": [ 00:13:53.059 { 00:13:53.059 "dma_device_id": "system", 00:13:53.059 "dma_device_type": 1 00:13:53.059 }, 00:13:53.059 { 00:13:53.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.059 "dma_device_type": 2 00:13:53.059 } 00:13:53.059 ], 00:13:53.059 "driver_specific": {} 00:13:53.059 }' 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:53.059 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:53.319 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:53.319 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:53.319 02:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:53.319 [2024-07-25 02:38:40.149055] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.319 [2024-07-25 02:38:40.149068] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.319 [2024-07-25 02:38:40.149081] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.319 [2024-07-25 02:38:40.149145] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.319 [2024-07-25 02:38:40.149148] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3cc643434f00 name Existed_Raid, state offline 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63324 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63324 ']' 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63324 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63324 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:53.319 killing process with pid 63324 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63324' 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63324 00:13:53.319 [2024-07-25 02:38:40.192084] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.319 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63324 00:13:53.319 [2024-07-25 02:38:40.210828] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.580 02:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:53.580 00:13:53.580 real 0m20.978s 00:13:53.580 user 0m37.512s 00:13:53.580 sys 0m3.884s 00:13:53.580 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.580 02:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.580 ************************************ 00:13:53.580 END TEST raid_state_function_test_sb 00:13:53.580 ************************************ 00:13:53.580 02:38:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:53.580 02:38:40 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:53.580 02:38:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:53.580 02:38:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.580 02:38:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.580 ************************************ 00:13:53.580 START TEST raid_superblock_test 00:13:53.580 ************************************ 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64118 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64118 /var/tmp/spdk-raid.sock 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64118 ']' 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.580 02:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.580 [2024-07-25 02:38:40.472328] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:13:53.580 [2024-07-25 02:38:40.472645] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:54.520 EAL: TSC is not safe to use in SMP mode 00:13:54.520 EAL: TSC is not invariant 00:13:54.520 [2024-07-25 02:38:41.211312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.520 [2024-07-25 02:38:41.304124] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:54.520 [2024-07-25 02:38:41.305775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.520 [2024-07-25 02:38:41.306334] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.520 [2024-07-25 02:38:41.306347] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:54.520 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:54.780 malloc1 00:13:54.780 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:55.040 [2024-07-25 02:38:41.705166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:55.040 [2024-07-25 02:38:41.705198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.040 [2024-07-25 02:38:41.705205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80234780 00:13:55.040 [2024-07-25 02:38:41.705212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.040 [2024-07-25 02:38:41.705765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.040 [2024-07-25 02:38:41.705794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:55.040 pt1 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:55.040 02:38:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:55.040 malloc2 00:13:55.040 02:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:55.300 [2024-07-25 02:38:42.057202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:55.300 [2024-07-25 02:38:42.057233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.300 [2024-07-25 02:38:42.057240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80234c80 00:13:55.300 [2024-07-25 02:38:42.057246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.300 [2024-07-25 02:38:42.057567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.300 [2024-07-25 02:38:42.057591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:55.300 pt2 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:55.300 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:55.560 malloc3 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:55.560 [2024-07-25 02:38:42.417235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:55.560 [2024-07-25 02:38:42.417274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.560 [2024-07-25 02:38:42.417281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235180 00:13:55.560 [2024-07-25 02:38:42.417287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.560 [2024-07-25 02:38:42.417726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.560 [2024-07-25 02:38:42.417753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:55.560 pt3 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 
00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:55.560 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:55.820 malloc4 00:13:55.820 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:56.079 [2024-07-25 02:38:42.793263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:56.079 [2024-07-25 02:38:42.793300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.079 [2024-07-25 02:38:42.793308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235680 00:13:56.079 [2024-07-25 02:38:42.793314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.079 [2024-07-25 02:38:42.793725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.079 [2024-07-25 02:38:42.793751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:56.079 pt4 00:13:56.079 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:56.079 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:56.080 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:56.080 [2024-07-25 02:38:42.977284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:56.080 [2024-07-25 02:38:42.977511] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:56.080 [2024-07-25 02:38:42.977523] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:56.080 [2024-07-25 02:38:42.977531] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:56.080 [2024-07-25 02:38:42.977573] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7df80235900 00:13:56.080 [2024-07-25 02:38:42.977579] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.080 [2024-07-25 02:38:42.977601] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7df80297e20 00:13:56.080 [2024-07-25 02:38:42.977642] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7df80235900 00:13:56.080 [2024-07-25 02:38:42.977647] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7df80235900 00:13:56.080 [2024-07-25 02:38:42.977665] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.339 02:38:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.339 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.339 "name": "raid_bdev1", 00:13:56.339 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:13:56.339 "strip_size_kb": 0, 00:13:56.339 "state": "online", 00:13:56.339 "raid_level": "raid1", 00:13:56.339 "superblock": true, 00:13:56.339 "num_base_bdevs": 4, 00:13:56.339 "num_base_bdevs_discovered": 4, 00:13:56.339 "num_base_bdevs_operational": 4, 00:13:56.339 "base_bdevs_list": [ 00:13:56.339 { 00:13:56.339 "name": "pt1", 00:13:56.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:56.339 "is_configured": true, 00:13:56.339 "data_offset": 2048, 00:13:56.339 "data_size": 63488 00:13:56.339 }, 00:13:56.339 { 00:13:56.339 "name": "pt2", 00:13:56.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.339 "is_configured": true, 00:13:56.339 "data_offset": 2048, 00:13:56.339 "data_size": 63488 00:13:56.339 }, 00:13:56.339 { 00:13:56.339 "name": "pt3", 00:13:56.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.339 "is_configured": true, 00:13:56.339 "data_offset": 2048, 00:13:56.339 "data_size": 63488 00:13:56.339 }, 00:13:56.339 { 00:13:56.339 "name": "pt4", 00:13:56.339 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:56.339 "is_configured": true, 00:13:56.339 "data_offset": 2048, 00:13:56.339 "data_size": 63488 00:13:56.339 } 00:13:56.339 ] 00:13:56.339 }' 00:13:56.339 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.339 02:38:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:56.608 
02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:56.608 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:56.870 [2024-07-25 02:38:43.629364] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.870 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:56.870 "name": "raid_bdev1", 00:13:56.870 "aliases": [ 00:13:56.870 "020803fc-4a2f-11ef-9c8e-7947904e2597" 00:13:56.870 ], 00:13:56.870 "product_name": "Raid Volume", 00:13:56.870 "block_size": 512, 00:13:56.870 "num_blocks": 63488, 00:13:56.870 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:13:56.870 "assigned_rate_limits": { 00:13:56.870 "rw_ios_per_sec": 0, 00:13:56.870 "rw_mbytes_per_sec": 0, 00:13:56.870 "r_mbytes_per_sec": 0, 00:13:56.870 "w_mbytes_per_sec": 0 00:13:56.870 }, 00:13:56.870 "claimed": false, 00:13:56.870 "zoned": false, 00:13:56.870 "supported_io_types": { 00:13:56.870 "read": true, 00:13:56.870 "write": true, 00:13:56.870 "unmap": false, 00:13:56.870 "flush": false, 00:13:56.870 "reset": true, 00:13:56.870 "nvme_admin": false, 00:13:56.870 "nvme_io": false, 00:13:56.870 "nvme_io_md": false, 00:13:56.870 "write_zeroes": true, 00:13:56.870 "zcopy": false, 00:13:56.870 "get_zone_info": false, 00:13:56.870 "zone_management": false, 00:13:56.870 "zone_append": false, 00:13:56.870 "compare": false, 00:13:56.870 "compare_and_write": false, 00:13:56.870 "abort": false, 00:13:56.870 "seek_hole": false, 00:13:56.870 "seek_data": false, 00:13:56.870 "copy": false, 00:13:56.870 "nvme_iov_md": false 00:13:56.870 }, 00:13:56.870 "memory_domains": [ 00:13:56.870 { 00:13:56.870 "dma_device_id": "system", 00:13:56.870 "dma_device_type": 1 00:13:56.870 }, 00:13:56.870 { 00:13:56.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.870 "dma_device_type": 2 00:13:56.870 }, 00:13:56.870 { 00:13:56.870 "dma_device_id": "system", 00:13:56.870 "dma_device_type": 1 00:13:56.870 }, 00:13:56.870 { 00:13:56.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.870 "dma_device_type": 2 00:13:56.870 }, 00:13:56.870 { 00:13:56.871 "dma_device_id": "system", 00:13:56.871 "dma_device_type": 1 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.871 "dma_device_type": 2 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "dma_device_id": "system", 00:13:56.871 "dma_device_type": 1 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.871 "dma_device_type": 2 00:13:56.871 } 00:13:56.871 ], 00:13:56.871 "driver_specific": { 00:13:56.871 "raid": { 00:13:56.871 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:13:56.871 "strip_size_kb": 0, 00:13:56.871 "state": "online", 00:13:56.871 "raid_level": "raid1", 00:13:56.871 "superblock": true, 00:13:56.871 "num_base_bdevs": 4, 00:13:56.871 "num_base_bdevs_discovered": 4, 00:13:56.871 "num_base_bdevs_operational": 4, 00:13:56.871 "base_bdevs_list": [ 00:13:56.871 { 00:13:56.871 "name": "pt1", 00:13:56.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:56.871 "is_configured": true, 00:13:56.871 "data_offset": 2048, 00:13:56.871 "data_size": 63488 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "name": "pt2", 00:13:56.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.871 "is_configured": true, 00:13:56.871 "data_offset": 2048, 00:13:56.871 "data_size": 63488 00:13:56.871 }, 00:13:56.871 
{ 00:13:56.871 "name": "pt3", 00:13:56.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.871 "is_configured": true, 00:13:56.871 "data_offset": 2048, 00:13:56.871 "data_size": 63488 00:13:56.871 }, 00:13:56.871 { 00:13:56.871 "name": "pt4", 00:13:56.871 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:56.871 "is_configured": true, 00:13:56.871 "data_offset": 2048, 00:13:56.871 "data_size": 63488 00:13:56.871 } 00:13:56.871 ] 00:13:56.871 } 00:13:56.871 } 00:13:56.871 }' 00:13:56.871 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:56.871 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:56.871 pt2 00:13:56.871 pt3 00:13:56.871 pt4' 00:13:56.871 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:56.871 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:56.871 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.130 "name": "pt1", 00:13:57.130 "aliases": [ 00:13:57.130 "00000000-0000-0000-0000-000000000001" 00:13:57.130 ], 00:13:57.130 "product_name": "passthru", 00:13:57.130 "block_size": 512, 00:13:57.130 "num_blocks": 65536, 00:13:57.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:57.130 "assigned_rate_limits": { 00:13:57.130 "rw_ios_per_sec": 0, 00:13:57.130 "rw_mbytes_per_sec": 0, 00:13:57.130 "r_mbytes_per_sec": 0, 00:13:57.130 "w_mbytes_per_sec": 0 00:13:57.130 }, 00:13:57.130 "claimed": true, 00:13:57.130 "claim_type": "exclusive_write", 00:13:57.130 "zoned": false, 00:13:57.130 "supported_io_types": { 00:13:57.130 "read": true, 00:13:57.130 "write": true, 00:13:57.130 "unmap": true, 00:13:57.130 "flush": true, 00:13:57.130 "reset": true, 00:13:57.130 "nvme_admin": false, 00:13:57.130 "nvme_io": false, 00:13:57.130 "nvme_io_md": false, 00:13:57.130 "write_zeroes": true, 00:13:57.130 "zcopy": true, 00:13:57.130 "get_zone_info": false, 00:13:57.130 "zone_management": false, 00:13:57.130 "zone_append": false, 00:13:57.130 "compare": false, 00:13:57.130 "compare_and_write": false, 00:13:57.130 "abort": true, 00:13:57.130 "seek_hole": false, 00:13:57.130 "seek_data": false, 00:13:57.130 "copy": true, 00:13:57.130 "nvme_iov_md": false 00:13:57.130 }, 00:13:57.130 "memory_domains": [ 00:13:57.130 { 00:13:57.130 "dma_device_id": "system", 00:13:57.130 "dma_device_type": 1 00:13:57.130 }, 00:13:57.130 { 00:13:57.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.130 "dma_device_type": 2 00:13:57.130 } 00:13:57.130 ], 00:13:57.130 "driver_specific": { 00:13:57.130 "passthru": { 00:13:57.130 "name": "pt1", 00:13:57.130 "base_bdev_name": "malloc1" 00:13:57.130 } 00:13:57.130 } 00:13:57.130 }' 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.130 02:38:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.130 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:57.131 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:57.131 02:38:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.390 "name": "pt2", 00:13:57.390 "aliases": [ 00:13:57.390 "00000000-0000-0000-0000-000000000002" 00:13:57.390 ], 00:13:57.390 "product_name": "passthru", 00:13:57.390 "block_size": 512, 00:13:57.390 "num_blocks": 65536, 00:13:57.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.390 "assigned_rate_limits": { 00:13:57.390 "rw_ios_per_sec": 0, 00:13:57.390 "rw_mbytes_per_sec": 0, 00:13:57.390 "r_mbytes_per_sec": 0, 00:13:57.390 "w_mbytes_per_sec": 0 00:13:57.390 }, 00:13:57.390 "claimed": true, 00:13:57.390 "claim_type": "exclusive_write", 00:13:57.390 "zoned": false, 00:13:57.390 "supported_io_types": { 00:13:57.390 "read": true, 00:13:57.390 "write": true, 00:13:57.390 "unmap": true, 00:13:57.390 "flush": true, 00:13:57.390 "reset": true, 00:13:57.390 "nvme_admin": false, 00:13:57.390 "nvme_io": false, 00:13:57.390 "nvme_io_md": false, 00:13:57.390 "write_zeroes": true, 00:13:57.390 "zcopy": true, 00:13:57.390 "get_zone_info": false, 00:13:57.390 "zone_management": false, 00:13:57.390 "zone_append": false, 00:13:57.390 "compare": false, 00:13:57.390 "compare_and_write": false, 00:13:57.390 "abort": true, 00:13:57.390 "seek_hole": false, 00:13:57.390 "seek_data": false, 00:13:57.390 "copy": true, 00:13:57.390 "nvme_iov_md": false 00:13:57.390 }, 00:13:57.390 "memory_domains": [ 00:13:57.390 { 00:13:57.390 "dma_device_id": "system", 00:13:57.390 "dma_device_type": 1 00:13:57.390 }, 00:13:57.390 { 00:13:57.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.390 "dma_device_type": 2 00:13:57.390 } 00:13:57.390 ], 00:13:57.390 "driver_specific": { 00:13:57.390 "passthru": { 00:13:57.390 "name": "pt2", 00:13:57.390 "base_bdev_name": "malloc2" 00:13:57.390 } 00:13:57.390 } 00:13:57.390 }' 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:57.390 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.649 "name": "pt3", 00:13:57.649 "aliases": [ 00:13:57.649 "00000000-0000-0000-0000-000000000003" 00:13:57.649 ], 00:13:57.649 "product_name": "passthru", 00:13:57.649 "block_size": 512, 00:13:57.649 "num_blocks": 65536, 00:13:57.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.649 "assigned_rate_limits": { 00:13:57.649 "rw_ios_per_sec": 0, 00:13:57.649 "rw_mbytes_per_sec": 0, 00:13:57.649 "r_mbytes_per_sec": 0, 00:13:57.649 "w_mbytes_per_sec": 0 00:13:57.649 }, 00:13:57.649 "claimed": true, 00:13:57.649 "claim_type": "exclusive_write", 00:13:57.649 "zoned": false, 00:13:57.649 "supported_io_types": { 00:13:57.649 "read": true, 00:13:57.649 "write": true, 00:13:57.649 "unmap": true, 00:13:57.649 "flush": true, 00:13:57.649 "reset": true, 00:13:57.649 "nvme_admin": false, 00:13:57.649 "nvme_io": false, 00:13:57.649 "nvme_io_md": false, 00:13:57.649 "write_zeroes": true, 00:13:57.649 "zcopy": true, 00:13:57.649 "get_zone_info": false, 00:13:57.649 "zone_management": false, 00:13:57.649 "zone_append": false, 00:13:57.649 "compare": false, 00:13:57.649 "compare_and_write": false, 00:13:57.649 "abort": true, 00:13:57.649 "seek_hole": false, 00:13:57.649 "seek_data": false, 00:13:57.649 "copy": true, 00:13:57.649 "nvme_iov_md": false 00:13:57.649 }, 00:13:57.649 "memory_domains": [ 00:13:57.649 { 00:13:57.649 "dma_device_id": "system", 00:13:57.649 "dma_device_type": 1 00:13:57.649 }, 00:13:57.649 { 00:13:57.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.649 "dma_device_type": 2 00:13:57.649 } 00:13:57.649 ], 00:13:57.649 "driver_specific": { 00:13:57.649 "passthru": { 00:13:57.649 "name": "pt3", 00:13:57.649 "base_bdev_name": "malloc3" 00:13:57.649 } 00:13:57.649 } 00:13:57.649 }' 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:57.649 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.908 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.908 "name": "pt4", 00:13:57.908 "aliases": [ 00:13:57.908 "00000000-0000-0000-0000-000000000004" 00:13:57.908 ], 00:13:57.908 "product_name": "passthru", 00:13:57.908 "block_size": 512, 00:13:57.908 "num_blocks": 65536, 00:13:57.908 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:57.908 "assigned_rate_limits": { 00:13:57.908 "rw_ios_per_sec": 0, 00:13:57.908 "rw_mbytes_per_sec": 0, 00:13:57.908 "r_mbytes_per_sec": 0, 00:13:57.908 "w_mbytes_per_sec": 0 00:13:57.908 }, 00:13:57.908 "claimed": true, 00:13:57.908 "claim_type": "exclusive_write", 00:13:57.908 "zoned": false, 00:13:57.908 "supported_io_types": { 00:13:57.908 "read": true, 00:13:57.908 "write": true, 00:13:57.908 "unmap": true, 00:13:57.908 "flush": true, 00:13:57.908 "reset": true, 00:13:57.908 "nvme_admin": false, 00:13:57.908 "nvme_io": false, 00:13:57.908 "nvme_io_md": false, 00:13:57.908 "write_zeroes": true, 00:13:57.908 "zcopy": true, 00:13:57.908 "get_zone_info": false, 00:13:57.908 "zone_management": false, 00:13:57.908 "zone_append": false, 00:13:57.908 "compare": false, 00:13:57.908 "compare_and_write": false, 00:13:57.908 "abort": true, 00:13:57.909 "seek_hole": false, 00:13:57.909 "seek_data": false, 00:13:57.909 "copy": true, 00:13:57.909 "nvme_iov_md": false 00:13:57.909 }, 00:13:57.909 "memory_domains": [ 00:13:57.909 { 00:13:57.909 "dma_device_id": "system", 00:13:57.909 "dma_device_type": 1 00:13:57.909 }, 00:13:57.909 { 00:13:57.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.909 "dma_device_type": 2 00:13:57.909 } 00:13:57.909 ], 00:13:57.909 "driver_specific": { 00:13:57.909 "passthru": { 00:13:57.909 "name": "pt4", 00:13:57.909 "base_bdev_name": "malloc4" 00:13:57.909 } 00:13:57.909 } 00:13:57.909 }' 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:57.909 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:58.167 [2024-07-25 02:38:44.941462] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.167 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=020803fc-4a2f-11ef-9c8e-7947904e2597 00:13:58.167 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 020803fc-4a2f-11ef-9c8e-7947904e2597 ']' 00:13:58.167 02:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:58.425 [2024-07-25 02:38:45.137455] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.425 [2024-07-25 02:38:45.137465] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.425 [2024-07-25 02:38:45.137479] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.425 [2024-07-25 02:38:45.137495] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.425 [2024-07-25 02:38:45.137498] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7df80235900 name raid_bdev1, state offline 00:13:58.425 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.425 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:58.425 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:58.425 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:58.425 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.425 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:58.683 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.683 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:58.942 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.942 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:59.201 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:59.202 02:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:59.202 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:59.202 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:59.461 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:59.720 [2024-07-25 02:38:46.389576] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:59.720 [2024-07-25 02:38:46.390016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:59.720 [2024-07-25 02:38:46.390033] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:59.720 [2024-07-25 02:38:46.390040] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:59.720 [2024-07-25 02:38:46.390051] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:59.720 [2024-07-25 02:38:46.390078] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:59.720 [2024-07-25 02:38:46.390087] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:59.720 [2024-07-25 02:38:46.390094] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:59.720 [2024-07-25 02:38:46.390101] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.720 [2024-07-25 02:38:46.390105] bdev_raid.c: 379:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x7df80235680 name raid_bdev1, state configuring 00:13:59.720 request: 00:13:59.720 { 00:13:59.720 "name": "raid_bdev1", 00:13:59.720 "raid_level": "raid1", 00:13:59.720 "base_bdevs": [ 00:13:59.720 "malloc1", 00:13:59.720 "malloc2", 00:13:59.720 "malloc3", 00:13:59.720 "malloc4" 00:13:59.720 ], 00:13:59.720 "superblock": false, 00:13:59.720 "method": "bdev_raid_create", 00:13:59.720 "req_id": 1 00:13:59.720 } 00:13:59.720 Got JSON-RPC error response 00:13:59.720 response: 00:13:59.720 { 00:13:59.720 "code": -17, 00:13:59.720 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:59.720 } 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:59.720 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:59.980 [2024-07-25 02:38:46.777600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:59.980 [2024-07-25 02:38:46.777624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.980 [2024-07-25 02:38:46.777632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235180 00:13:59.980 [2024-07-25 02:38:46.777637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.980 [2024-07-25 02:38:46.777925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.980 [2024-07-25 02:38:46.777944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:59.980 [2024-07-25 02:38:46.777959] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:59.980 [2024-07-25 02:38:46.777966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:59.980 pt1 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.980 02:38:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.980 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.238 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.238 "name": "raid_bdev1", 00:14:00.238 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:00.238 "strip_size_kb": 0, 00:14:00.238 "state": "configuring", 00:14:00.238 "raid_level": "raid1", 00:14:00.238 "superblock": true, 00:14:00.238 "num_base_bdevs": 4, 00:14:00.238 "num_base_bdevs_discovered": 1, 00:14:00.238 "num_base_bdevs_operational": 4, 00:14:00.238 "base_bdevs_list": [ 00:14:00.238 { 00:14:00.238 "name": "pt1", 00:14:00.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.238 "is_configured": true, 00:14:00.238 "data_offset": 2048, 00:14:00.238 "data_size": 63488 00:14:00.238 }, 00:14:00.238 { 00:14:00.238 "name": null, 00:14:00.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.238 "is_configured": false, 00:14:00.238 "data_offset": 2048, 00:14:00.238 "data_size": 63488 00:14:00.238 }, 00:14:00.238 { 00:14:00.238 "name": null, 00:14:00.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.238 "is_configured": false, 00:14:00.238 "data_offset": 2048, 00:14:00.238 "data_size": 63488 00:14:00.238 }, 00:14:00.238 { 00:14:00.238 "name": null, 00:14:00.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.238 "is_configured": false, 00:14:00.238 "data_offset": 2048, 00:14:00.238 "data_size": 63488 00:14:00.238 } 00:14:00.238 ] 00:14:00.238 }' 00:14:00.238 02:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.238 02:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.497 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:14:00.497 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:00.756 [2024-07-25 02:38:47.433657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:00.756 [2024-07-25 02:38:47.433683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.756 [2024-07-25 02:38:47.433690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80234780 00:14:00.756 [2024-07-25 02:38:47.433696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.756 [2024-07-25 02:38:47.433747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.756 [2024-07-25 02:38:47.433753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:00.756 [2024-07-25 02:38:47.433765] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:00.756 [2024-07-25 02:38:47.433771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:00.756 pt2 00:14:00.756 02:38:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:00.756 [2024-07-25 02:38:47.625674] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.756 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.015 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.015 "name": "raid_bdev1", 00:14:01.015 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:01.015 "strip_size_kb": 0, 00:14:01.015 "state": "configuring", 00:14:01.015 "raid_level": "raid1", 00:14:01.015 "superblock": true, 00:14:01.015 "num_base_bdevs": 4, 00:14:01.015 "num_base_bdevs_discovered": 1, 00:14:01.015 "num_base_bdevs_operational": 4, 00:14:01.015 "base_bdevs_list": [ 00:14:01.015 { 00:14:01.015 "name": "pt1", 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:01.015 "is_configured": true, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 }, 00:14:01.015 { 00:14:01.015 "name": null, 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.015 "is_configured": false, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 }, 00:14:01.015 { 00:14:01.015 "name": null, 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.015 "is_configured": false, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 }, 00:14:01.015 { 00:14:01.015 "name": null, 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.015 "is_configured": false, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 } 00:14:01.015 ] 00:14:01.015 }' 00:14:01.015 02:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.015 02:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.275 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:01.275 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:01.275 02:38:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.534 [2024-07-25 02:38:48.281732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.534 [2024-07-25 02:38:48.281759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.534 [2024-07-25 02:38:48.281766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80234780 00:14:01.534 [2024-07-25 02:38:48.281772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.534 [2024-07-25 02:38:48.281835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.534 [2024-07-25 02:38:48.281841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.534 [2024-07-25 02:38:48.281854] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.534 [2024-07-25 02:38:48.281859] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.534 pt2 00:14:01.534 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:01.534 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:01.534 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:01.793 [2024-07-25 02:38:48.473754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:01.793 [2024-07-25 02:38:48.473785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.793 [2024-07-25 02:38:48.473808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235b80 00:14:01.793 [2024-07-25 02:38:48.473814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.793 [2024-07-25 02:38:48.473883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.793 [2024-07-25 02:38:48.473890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:01.793 [2024-07-25 02:38:48.473905] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:01.793 [2024-07-25 02:38:48.473910] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:01.793 pt3 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:01.793 [2024-07-25 02:38:48.653761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:01.793 [2024-07-25 02:38:48.653781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.793 [2024-07-25 02:38:48.653788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235900 00:14:01.793 [2024-07-25 02:38:48.653793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.793 [2024-07-25 02:38:48.653836] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.793 [2024-07-25 02:38:48.653842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:01.793 [2024-07-25 02:38:48.653852] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:01.793 [2024-07-25 02:38:48.653857] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:01.793 [2024-07-25 02:38:48.653876] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7df80234c80 00:14:01.793 [2024-07-25 02:38:48.653880] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:01.793 [2024-07-25 02:38:48.653895] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7df80297e20 00:14:01.793 [2024-07-25 02:38:48.653930] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7df80234c80 00:14:01.793 [2024-07-25 02:38:48.653933] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7df80234c80 00:14:01.793 [2024-07-25 02:38:48.653947] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.793 pt4 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.793 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.052 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.052 "name": "raid_bdev1", 00:14:02.052 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:02.052 "strip_size_kb": 0, 00:14:02.052 "state": "online", 00:14:02.052 "raid_level": "raid1", 00:14:02.052 "superblock": true, 00:14:02.052 "num_base_bdevs": 4, 00:14:02.052 "num_base_bdevs_discovered": 4, 00:14:02.052 "num_base_bdevs_operational": 4, 00:14:02.052 "base_bdevs_list": [ 00:14:02.052 { 00:14:02.052 "name": "pt1", 00:14:02.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.052 "is_configured": true, 00:14:02.052 "data_offset": 2048, 00:14:02.052 "data_size": 63488 00:14:02.052 }, 
00:14:02.052 { 00:14:02.052 "name": "pt2", 00:14:02.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.052 "is_configured": true, 00:14:02.052 "data_offset": 2048, 00:14:02.052 "data_size": 63488 00:14:02.052 }, 00:14:02.052 { 00:14:02.052 "name": "pt3", 00:14:02.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.052 "is_configured": true, 00:14:02.052 "data_offset": 2048, 00:14:02.052 "data_size": 63488 00:14:02.052 }, 00:14:02.052 { 00:14:02.052 "name": "pt4", 00:14:02.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.052 "is_configured": true, 00:14:02.052 "data_offset": 2048, 00:14:02.052 "data_size": 63488 00:14:02.052 } 00:14:02.052 ] 00:14:02.052 }' 00:14:02.052 02:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.052 02:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:02.312 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.571 [2024-07-25 02:38:49.293854] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.571 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.571 "name": "raid_bdev1", 00:14:02.571 "aliases": [ 00:14:02.571 "020803fc-4a2f-11ef-9c8e-7947904e2597" 00:14:02.571 ], 00:14:02.571 "product_name": "Raid Volume", 00:14:02.571 "block_size": 512, 00:14:02.571 "num_blocks": 63488, 00:14:02.571 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:02.571 "assigned_rate_limits": { 00:14:02.571 "rw_ios_per_sec": 0, 00:14:02.571 "rw_mbytes_per_sec": 0, 00:14:02.571 "r_mbytes_per_sec": 0, 00:14:02.571 "w_mbytes_per_sec": 0 00:14:02.571 }, 00:14:02.571 "claimed": false, 00:14:02.571 "zoned": false, 00:14:02.571 "supported_io_types": { 00:14:02.571 "read": true, 00:14:02.571 "write": true, 00:14:02.571 "unmap": false, 00:14:02.571 "flush": false, 00:14:02.571 "reset": true, 00:14:02.571 "nvme_admin": false, 00:14:02.571 "nvme_io": false, 00:14:02.571 "nvme_io_md": false, 00:14:02.571 "write_zeroes": true, 00:14:02.571 "zcopy": false, 00:14:02.571 "get_zone_info": false, 00:14:02.571 "zone_management": false, 00:14:02.571 "zone_append": false, 00:14:02.571 "compare": false, 00:14:02.571 "compare_and_write": false, 00:14:02.571 "abort": false, 00:14:02.571 "seek_hole": false, 00:14:02.571 "seek_data": false, 00:14:02.571 "copy": false, 00:14:02.571 "nvme_iov_md": false 00:14:02.571 }, 00:14:02.571 "memory_domains": [ 00:14:02.571 { 00:14:02.571 "dma_device_id": "system", 00:14:02.571 "dma_device_type": 1 00:14:02.571 }, 00:14:02.571 { 00:14:02.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.571 "dma_device_type": 2 00:14:02.571 }, 
00:14:02.571 { 00:14:02.571 "dma_device_id": "system", 00:14:02.571 "dma_device_type": 1 00:14:02.571 }, 00:14:02.571 { 00:14:02.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.571 "dma_device_type": 2 00:14:02.571 }, 00:14:02.571 { 00:14:02.571 "dma_device_id": "system", 00:14:02.571 "dma_device_type": 1 00:14:02.571 }, 00:14:02.571 { 00:14:02.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.571 "dma_device_type": 2 00:14:02.571 }, 00:14:02.571 { 00:14:02.571 "dma_device_id": "system", 00:14:02.571 "dma_device_type": 1 00:14:02.571 }, 00:14:02.572 { 00:14:02.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.572 "dma_device_type": 2 00:14:02.572 } 00:14:02.572 ], 00:14:02.572 "driver_specific": { 00:14:02.572 "raid": { 00:14:02.572 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:02.572 "strip_size_kb": 0, 00:14:02.572 "state": "online", 00:14:02.572 "raid_level": "raid1", 00:14:02.572 "superblock": true, 00:14:02.572 "num_base_bdevs": 4, 00:14:02.572 "num_base_bdevs_discovered": 4, 00:14:02.572 "num_base_bdevs_operational": 4, 00:14:02.572 "base_bdevs_list": [ 00:14:02.572 { 00:14:02.572 "name": "pt1", 00:14:02.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.572 "is_configured": true, 00:14:02.572 "data_offset": 2048, 00:14:02.572 "data_size": 63488 00:14:02.572 }, 00:14:02.572 { 00:14:02.572 "name": "pt2", 00:14:02.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.572 "is_configured": true, 00:14:02.572 "data_offset": 2048, 00:14:02.572 "data_size": 63488 00:14:02.572 }, 00:14:02.572 { 00:14:02.572 "name": "pt3", 00:14:02.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.572 "is_configured": true, 00:14:02.572 "data_offset": 2048, 00:14:02.572 "data_size": 63488 00:14:02.572 }, 00:14:02.572 { 00:14:02.572 "name": "pt4", 00:14:02.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.572 "is_configured": true, 00:14:02.572 "data_offset": 2048, 00:14:02.572 "data_size": 63488 00:14:02.572 } 00:14:02.572 ] 00:14:02.572 } 00:14:02.572 } 00:14:02.572 }' 00:14:02.572 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.572 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:02.572 pt2 00:14:02.572 pt3 00:14:02.572 pt4' 00:14:02.572 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.572 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:02.572 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.838 "name": "pt1", 00:14:02.838 "aliases": [ 00:14:02.838 "00000000-0000-0000-0000-000000000001" 00:14:02.838 ], 00:14:02.838 "product_name": "passthru", 00:14:02.838 "block_size": 512, 00:14:02.838 "num_blocks": 65536, 00:14:02.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:02.838 "assigned_rate_limits": { 00:14:02.838 "rw_ios_per_sec": 0, 00:14:02.838 "rw_mbytes_per_sec": 0, 00:14:02.838 "r_mbytes_per_sec": 0, 00:14:02.838 "w_mbytes_per_sec": 0 00:14:02.838 }, 00:14:02.838 "claimed": true, 00:14:02.838 "claim_type": "exclusive_write", 00:14:02.838 "zoned": false, 00:14:02.838 "supported_io_types": { 00:14:02.838 "read": true, 00:14:02.838 "write": true, 00:14:02.838 
"unmap": true, 00:14:02.838 "flush": true, 00:14:02.838 "reset": true, 00:14:02.838 "nvme_admin": false, 00:14:02.838 "nvme_io": false, 00:14:02.838 "nvme_io_md": false, 00:14:02.838 "write_zeroes": true, 00:14:02.838 "zcopy": true, 00:14:02.838 "get_zone_info": false, 00:14:02.838 "zone_management": false, 00:14:02.838 "zone_append": false, 00:14:02.838 "compare": false, 00:14:02.838 "compare_and_write": false, 00:14:02.838 "abort": true, 00:14:02.838 "seek_hole": false, 00:14:02.838 "seek_data": false, 00:14:02.838 "copy": true, 00:14:02.838 "nvme_iov_md": false 00:14:02.838 }, 00:14:02.838 "memory_domains": [ 00:14:02.838 { 00:14:02.838 "dma_device_id": "system", 00:14:02.838 "dma_device_type": 1 00:14:02.838 }, 00:14:02.838 { 00:14:02.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.838 "dma_device_type": 2 00:14:02.838 } 00:14:02.838 ], 00:14:02.838 "driver_specific": { 00:14:02.838 "passthru": { 00:14:02.838 "name": "pt1", 00:14:02.838 "base_bdev_name": "malloc1" 00:14:02.838 } 00:14:02.838 } 00:14:02.838 }' 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:02.838 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.120 "name": "pt2", 00:14:03.120 "aliases": [ 00:14:03.120 "00000000-0000-0000-0000-000000000002" 00:14:03.120 ], 00:14:03.120 "product_name": "passthru", 00:14:03.120 "block_size": 512, 00:14:03.120 "num_blocks": 65536, 00:14:03.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:03.120 "assigned_rate_limits": { 00:14:03.120 "rw_ios_per_sec": 0, 00:14:03.120 "rw_mbytes_per_sec": 0, 00:14:03.120 "r_mbytes_per_sec": 0, 00:14:03.120 "w_mbytes_per_sec": 0 00:14:03.120 }, 00:14:03.120 "claimed": true, 00:14:03.120 "claim_type": "exclusive_write", 00:14:03.120 "zoned": false, 00:14:03.120 "supported_io_types": { 00:14:03.120 "read": true, 00:14:03.120 "write": true, 00:14:03.120 "unmap": true, 00:14:03.120 "flush": true, 00:14:03.120 "reset": true, 00:14:03.120 "nvme_admin": false, 00:14:03.120 "nvme_io": false, 00:14:03.120 
"nvme_io_md": false, 00:14:03.120 "write_zeroes": true, 00:14:03.120 "zcopy": true, 00:14:03.120 "get_zone_info": false, 00:14:03.120 "zone_management": false, 00:14:03.120 "zone_append": false, 00:14:03.120 "compare": false, 00:14:03.120 "compare_and_write": false, 00:14:03.120 "abort": true, 00:14:03.120 "seek_hole": false, 00:14:03.120 "seek_data": false, 00:14:03.120 "copy": true, 00:14:03.120 "nvme_iov_md": false 00:14:03.120 }, 00:14:03.120 "memory_domains": [ 00:14:03.120 { 00:14:03.120 "dma_device_id": "system", 00:14:03.120 "dma_device_type": 1 00:14:03.120 }, 00:14:03.120 { 00:14:03.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.120 "dma_device_type": 2 00:14:03.120 } 00:14:03.120 ], 00:14:03.120 "driver_specific": { 00:14:03.120 "passthru": { 00:14:03.120 "name": "pt2", 00:14:03.120 "base_bdev_name": "malloc2" 00:14:03.120 } 00:14:03.120 } 00:14:03.120 }' 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:03.120 02:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.405 "name": "pt3", 00:14:03.405 "aliases": [ 00:14:03.405 "00000000-0000-0000-0000-000000000003" 00:14:03.405 ], 00:14:03.405 "product_name": "passthru", 00:14:03.405 "block_size": 512, 00:14:03.405 "num_blocks": 65536, 00:14:03.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:03.405 "assigned_rate_limits": { 00:14:03.405 "rw_ios_per_sec": 0, 00:14:03.405 "rw_mbytes_per_sec": 0, 00:14:03.405 "r_mbytes_per_sec": 0, 00:14:03.405 "w_mbytes_per_sec": 0 00:14:03.405 }, 00:14:03.405 "claimed": true, 00:14:03.405 "claim_type": "exclusive_write", 00:14:03.405 "zoned": false, 00:14:03.405 "supported_io_types": { 00:14:03.405 "read": true, 00:14:03.405 "write": true, 00:14:03.405 "unmap": true, 00:14:03.405 "flush": true, 00:14:03.405 "reset": true, 00:14:03.405 "nvme_admin": false, 00:14:03.405 "nvme_io": false, 00:14:03.405 "nvme_io_md": false, 00:14:03.405 "write_zeroes": true, 00:14:03.405 "zcopy": true, 00:14:03.405 "get_zone_info": false, 00:14:03.405 "zone_management": 
false, 00:14:03.405 "zone_append": false, 00:14:03.405 "compare": false, 00:14:03.405 "compare_and_write": false, 00:14:03.405 "abort": true, 00:14:03.405 "seek_hole": false, 00:14:03.405 "seek_data": false, 00:14:03.405 "copy": true, 00:14:03.405 "nvme_iov_md": false 00:14:03.405 }, 00:14:03.405 "memory_domains": [ 00:14:03.405 { 00:14:03.405 "dma_device_id": "system", 00:14:03.405 "dma_device_type": 1 00:14:03.405 }, 00:14:03.405 { 00:14:03.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.405 "dma_device_type": 2 00:14:03.405 } 00:14:03.405 ], 00:14:03.405 "driver_specific": { 00:14:03.405 "passthru": { 00:14:03.405 "name": "pt3", 00:14:03.405 "base_bdev_name": "malloc3" 00:14:03.405 } 00:14:03.405 } 00:14:03.405 }' 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:03.405 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.673 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.673 "name": "pt4", 00:14:03.673 "aliases": [ 00:14:03.673 "00000000-0000-0000-0000-000000000004" 00:14:03.673 ], 00:14:03.673 "product_name": "passthru", 00:14:03.673 "block_size": 512, 00:14:03.673 "num_blocks": 65536, 00:14:03.673 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:03.673 "assigned_rate_limits": { 00:14:03.673 "rw_ios_per_sec": 0, 00:14:03.673 "rw_mbytes_per_sec": 0, 00:14:03.673 "r_mbytes_per_sec": 0, 00:14:03.673 "w_mbytes_per_sec": 0 00:14:03.673 }, 00:14:03.673 "claimed": true, 00:14:03.673 "claim_type": "exclusive_write", 00:14:03.673 "zoned": false, 00:14:03.673 "supported_io_types": { 00:14:03.673 "read": true, 00:14:03.673 "write": true, 00:14:03.673 "unmap": true, 00:14:03.674 "flush": true, 00:14:03.674 "reset": true, 00:14:03.674 "nvme_admin": false, 00:14:03.674 "nvme_io": false, 00:14:03.674 "nvme_io_md": false, 00:14:03.674 "write_zeroes": true, 00:14:03.674 "zcopy": true, 00:14:03.674 "get_zone_info": false, 00:14:03.674 "zone_management": false, 00:14:03.674 "zone_append": false, 00:14:03.674 "compare": false, 00:14:03.674 "compare_and_write": false, 00:14:03.674 "abort": true, 00:14:03.674 
"seek_hole": false, 00:14:03.674 "seek_data": false, 00:14:03.674 "copy": true, 00:14:03.674 "nvme_iov_md": false 00:14:03.674 }, 00:14:03.674 "memory_domains": [ 00:14:03.674 { 00:14:03.674 "dma_device_id": "system", 00:14:03.674 "dma_device_type": 1 00:14:03.674 }, 00:14:03.674 { 00:14:03.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.674 "dma_device_type": 2 00:14:03.674 } 00:14:03.674 ], 00:14:03.674 "driver_specific": { 00:14:03.674 "passthru": { 00:14:03.674 "name": "pt4", 00:14:03.674 "base_bdev_name": "malloc4" 00:14:03.674 } 00:14:03.674 } 00:14:03.674 }' 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:03.674 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:03.932 [2024-07-25 02:38:50.681962] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.932 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 020803fc-4a2f-11ef-9c8e-7947904e2597 '!=' 020803fc-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:03.932 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:03.932 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:03.932 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:03.932 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:04.191 [2024-07-25 02:38:50.881960] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.191 02:38:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.450 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.450 "name": "raid_bdev1", 00:14:04.450 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:04.450 "strip_size_kb": 0, 00:14:04.450 "state": "online", 00:14:04.450 "raid_level": "raid1", 00:14:04.450 "superblock": true, 00:14:04.450 "num_base_bdevs": 4, 00:14:04.450 "num_base_bdevs_discovered": 3, 00:14:04.450 "num_base_bdevs_operational": 3, 00:14:04.450 "base_bdevs_list": [ 00:14:04.450 { 00:14:04.450 "name": null, 00:14:04.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.450 "is_configured": false, 00:14:04.450 "data_offset": 2048, 00:14:04.450 "data_size": 63488 00:14:04.450 }, 00:14:04.450 { 00:14:04.450 "name": "pt2", 00:14:04.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.450 "is_configured": true, 00:14:04.450 "data_offset": 2048, 00:14:04.450 "data_size": 63488 00:14:04.450 }, 00:14:04.450 { 00:14:04.450 "name": "pt3", 00:14:04.450 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.450 "is_configured": true, 00:14:04.450 "data_offset": 2048, 00:14:04.450 "data_size": 63488 00:14:04.450 }, 00:14:04.450 { 00:14:04.450 "name": "pt4", 00:14:04.450 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:04.450 "is_configured": true, 00:14:04.450 "data_offset": 2048, 00:14:04.450 "data_size": 63488 00:14:04.450 } 00:14:04.450 ] 00:14:04.450 }' 00:14:04.450 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.450 02:38:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.710 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:04.710 [2024-07-25 02:38:51.542013] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:04.710 [2024-07-25 02:38:51.542028] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.710 [2024-07-25 02:38:51.542044] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.710 [2024-07-25 02:38:51.542056] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.710 [2024-07-25 02:38:51.542059] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7df80234c80 name raid_bdev1, state offline 00:14:04.710 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.710 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:04.969 02:38:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:04.970 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:04.970 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:04.970 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:04.970 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:05.229 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:05.229 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:05.229 02:38:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:05.229 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:05.229 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:05.229 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:05.489 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:05.489 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:05.489 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:05.489 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:05.489 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.748 [2024-07-25 02:38:52.442127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.748 [2024-07-25 02:38:52.442163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.748 [2024-07-25 02:38:52.442171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235900 00:14:05.748 [2024-07-25 02:38:52.442177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.748 [2024-07-25 02:38:52.442676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.748 [2024-07-25 02:38:52.442701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.748 [2024-07-25 02:38:52.442718] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:05.748 [2024-07-25 02:38:52.442727] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.748 pt2 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.748 02:38:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.748 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.749 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.749 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.749 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.008 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.008 "name": "raid_bdev1", 00:14:06.008 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:06.008 "strip_size_kb": 0, 00:14:06.008 "state": "configuring", 00:14:06.008 "raid_level": "raid1", 00:14:06.008 "superblock": true, 00:14:06.008 "num_base_bdevs": 4, 00:14:06.008 "num_base_bdevs_discovered": 1, 00:14:06.008 "num_base_bdevs_operational": 3, 00:14:06.008 "base_bdevs_list": [ 00:14:06.008 { 00:14:06.008 "name": null, 00:14:06.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.008 "is_configured": false, 00:14:06.008 "data_offset": 2048, 00:14:06.008 "data_size": 63488 00:14:06.008 }, 00:14:06.008 { 00:14:06.008 "name": "pt2", 00:14:06.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.008 "is_configured": true, 00:14:06.008 "data_offset": 2048, 00:14:06.008 "data_size": 63488 00:14:06.008 }, 00:14:06.008 { 00:14:06.008 "name": null, 00:14:06.008 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.008 "is_configured": false, 00:14:06.008 "data_offset": 2048, 00:14:06.008 "data_size": 63488 00:14:06.008 }, 00:14:06.008 { 00:14:06.008 "name": null, 00:14:06.008 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.008 "is_configured": false, 00:14:06.008 "data_offset": 2048, 00:14:06.008 "data_size": 63488 00:14:06.008 } 00:14:06.008 ] 00:14:06.008 }' 00:14:06.008 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.008 02:38:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.268 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:06.268 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:06.268 02:38:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.268 [2024-07-25 02:38:53.078196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.268 [2024-07-25 02:38:53.078229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.268 [2024-07-25 02:38:53.078238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235680 00:14:06.268 [2024-07-25 02:38:53.078244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.268 [2024-07-25 02:38:53.078312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.268 [2024-07-25 02:38:53.078318] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:14:06.268 [2024-07-25 02:38:53.078332] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:06.268 [2024-07-25 02:38:53.078338] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.268 pt3 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.268 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.528 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.528 "name": "raid_bdev1", 00:14:06.528 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:06.528 "strip_size_kb": 0, 00:14:06.528 "state": "configuring", 00:14:06.528 "raid_level": "raid1", 00:14:06.528 "superblock": true, 00:14:06.528 "num_base_bdevs": 4, 00:14:06.528 "num_base_bdevs_discovered": 2, 00:14:06.528 "num_base_bdevs_operational": 3, 00:14:06.528 "base_bdevs_list": [ 00:14:06.528 { 00:14:06.528 "name": null, 00:14:06.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.528 "is_configured": false, 00:14:06.528 "data_offset": 2048, 00:14:06.528 "data_size": 63488 00:14:06.528 }, 00:14:06.528 { 00:14:06.528 "name": "pt2", 00:14:06.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.528 "is_configured": true, 00:14:06.528 "data_offset": 2048, 00:14:06.528 "data_size": 63488 00:14:06.528 }, 00:14:06.528 { 00:14:06.528 "name": "pt3", 00:14:06.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.528 "is_configured": true, 00:14:06.528 "data_offset": 2048, 00:14:06.528 "data_size": 63488 00:14:06.528 }, 00:14:06.528 { 00:14:06.528 "name": null, 00:14:06.528 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.528 "is_configured": false, 00:14:06.528 "data_offset": 2048, 00:14:06.528 "data_size": 63488 00:14:06.528 } 00:14:06.528 ] 00:14:06.528 }' 00:14:06.528 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.528 02:38:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.786 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:06.786 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:06.786 02:38:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:14:06.786 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:07.045 [2024-07-25 02:38:53.722251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:07.045 [2024-07-25 02:38:53.722279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.045 [2024-07-25 02:38:53.722287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80234c80 00:14:07.045 [2024-07-25 02:38:53.722292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.045 [2024-07-25 02:38:53.722343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.045 [2024-07-25 02:38:53.722349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:07.045 [2024-07-25 02:38:53.722361] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:07.045 [2024-07-25 02:38:53.722366] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:07.045 [2024-07-25 02:38:53.722384] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7df80234780 00:14:07.045 [2024-07-25 02:38:53.722387] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:07.045 [2024-07-25 02:38:53.722401] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7df80297e20 00:14:07.045 [2024-07-25 02:38:53.722430] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7df80234780 00:14:07.045 [2024-07-25 02:38:53.722434] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7df80234780 00:14:07.045 [2024-07-25 02:38:53.722450] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.045 pt4 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.045 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.305 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:07.305 "name": "raid_bdev1", 00:14:07.305 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:07.305 "strip_size_kb": 0, 00:14:07.305 "state": "online", 00:14:07.305 "raid_level": "raid1", 00:14:07.305 "superblock": true, 00:14:07.305 "num_base_bdevs": 4, 00:14:07.305 "num_base_bdevs_discovered": 3, 00:14:07.305 "num_base_bdevs_operational": 3, 00:14:07.305 "base_bdevs_list": [ 00:14:07.305 { 00:14:07.305 "name": null, 00:14:07.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.305 "is_configured": false, 00:14:07.305 "data_offset": 2048, 00:14:07.305 "data_size": 63488 00:14:07.305 }, 00:14:07.305 { 00:14:07.305 "name": "pt2", 00:14:07.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.305 "is_configured": true, 00:14:07.305 "data_offset": 2048, 00:14:07.305 "data_size": 63488 00:14:07.305 }, 00:14:07.305 { 00:14:07.305 "name": "pt3", 00:14:07.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.305 "is_configured": true, 00:14:07.305 "data_offset": 2048, 00:14:07.305 "data_size": 63488 00:14:07.305 }, 00:14:07.305 { 00:14:07.305 "name": "pt4", 00:14:07.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.305 "is_configured": true, 00:14:07.305 "data_offset": 2048, 00:14:07.305 "data_size": 63488 00:14:07.305 } 00:14:07.305 ] 00:14:07.305 }' 00:14:07.305 02:38:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.305 02:38:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.565 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:07.565 [2024-07-25 02:38:54.394306] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.565 [2024-07-25 02:38:54.394317] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.565 [2024-07-25 02:38:54.394329] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.565 [2024-07-25 02:38:54.394339] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.565 [2024-07-25 02:38:54.394342] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7df80234780 name raid_bdev1, state offline 00:14:07.565 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.565 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:07.825 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:07.825 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:07.825 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:14:07.825 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:14:07.825 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:14:08.084 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:08.084 [2024-07-25 02:38:54.978357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:14:08.084 [2024-07-25 02:38:54.978380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.084 [2024-07-25 02:38:54.978387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80234c80 00:14:08.084 [2024-07-25 02:38:54.978392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.084 [2024-07-25 02:38:54.978807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.084 [2024-07-25 02:38:54.978832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:08.084 [2024-07-25 02:38:54.978847] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:08.084 [2024-07-25 02:38:54.978854] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:08.084 [2024-07-25 02:38:54.978871] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:08.084 [2024-07-25 02:38:54.978873] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.084 [2024-07-25 02:38:54.978877] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7df80234780 name raid_bdev1, state configuring 00:14:08.084 [2024-07-25 02:38:54.978883] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.084 [2024-07-25 02:38:54.978896] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:08.084 pt1 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.344 02:38:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.344 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.344 "name": "raid_bdev1", 00:14:08.344 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:08.344 "strip_size_kb": 0, 00:14:08.344 "state": "configuring", 00:14:08.344 "raid_level": "raid1", 00:14:08.344 "superblock": true, 00:14:08.344 "num_base_bdevs": 4, 00:14:08.344 "num_base_bdevs_discovered": 2, 00:14:08.344 "num_base_bdevs_operational": 3, 00:14:08.344 
"base_bdevs_list": [ 00:14:08.344 { 00:14:08.344 "name": null, 00:14:08.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.344 "is_configured": false, 00:14:08.344 "data_offset": 2048, 00:14:08.344 "data_size": 63488 00:14:08.344 }, 00:14:08.344 { 00:14:08.344 "name": "pt2", 00:14:08.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.344 "is_configured": true, 00:14:08.344 "data_offset": 2048, 00:14:08.344 "data_size": 63488 00:14:08.344 }, 00:14:08.344 { 00:14:08.344 "name": "pt3", 00:14:08.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.344 "is_configured": true, 00:14:08.344 "data_offset": 2048, 00:14:08.344 "data_size": 63488 00:14:08.344 }, 00:14:08.344 { 00:14:08.344 "name": null, 00:14:08.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.344 "is_configured": false, 00:14:08.344 "data_offset": 2048, 00:14:08.344 "data_size": 63488 00:14:08.344 } 00:14:08.344 ] 00:14:08.344 }' 00:14:08.344 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.344 02:38:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.604 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:08.604 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:08.864 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:14:08.864 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:09.124 [2024-07-25 02:38:55.814448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:09.124 [2024-07-25 02:38:55.814477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.124 [2024-07-25 02:38:55.814485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7df80235180 00:14:09.124 [2024-07-25 02:38:55.814491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.124 [2024-07-25 02:38:55.814560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.124 [2024-07-25 02:38:55.814566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:09.124 [2024-07-25 02:38:55.814578] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:09.124 [2024-07-25 02:38:55.814584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:09.124 [2024-07-25 02:38:55.814601] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x7df80234780 00:14:09.124 [2024-07-25 02:38:55.814604] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:09.124 [2024-07-25 02:38:55.814618] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7df80297e20 00:14:09.124 [2024-07-25 02:38:55.814649] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7df80234780 00:14:09.124 [2024-07-25 02:38:55.814652] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7df80234780 00:14:09.124 [2024-07-25 02:38:55.814665] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.124 pt4 
00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.124 02:38:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.124 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.124 "name": "raid_bdev1", 00:14:09.124 "uuid": "020803fc-4a2f-11ef-9c8e-7947904e2597", 00:14:09.124 "strip_size_kb": 0, 00:14:09.124 "state": "online", 00:14:09.124 "raid_level": "raid1", 00:14:09.124 "superblock": true, 00:14:09.124 "num_base_bdevs": 4, 00:14:09.124 "num_base_bdevs_discovered": 3, 00:14:09.124 "num_base_bdevs_operational": 3, 00:14:09.124 "base_bdevs_list": [ 00:14:09.124 { 00:14:09.124 "name": null, 00:14:09.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.124 "is_configured": false, 00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "name": "pt2", 00:14:09.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.124 "is_configured": true, 00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "name": "pt3", 00:14:09.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.124 "is_configured": true, 00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 }, 00:14:09.124 { 00:14:09.124 "name": "pt4", 00:14:09.124 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.124 "is_configured": true, 00:14:09.124 "data_offset": 2048, 00:14:09.124 "data_size": 63488 00:14:09.124 } 00:14:09.124 ] 00:14:09.124 }' 00:14:09.124 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.124 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.694 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:09.694 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:09.694 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:09.694 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:09.694 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:09.954 [2024-07-25 02:38:56.654547] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 020803fc-4a2f-11ef-9c8e-7947904e2597 '!=' 020803fc-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64118 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64118 ']' 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64118 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64118 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:09.954 killing process with pid 64118 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64118' 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64118 00:14:09.954 [2024-07-25 02:38:56.700728] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.954 [2024-07-25 02:38:56.700743] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.954 [2024-07-25 02:38:56.700764] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.954 [2024-07-25 02:38:56.700767] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7df80234780 name raid_bdev1, state offline 00:14:09.954 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64118 00:14:09.954 [2024-07-25 02:38:56.719776] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.215 02:38:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:10.215 00:14:10.215 real 0m16.435s 00:14:10.215 user 0m28.876s 00:14:10.215 sys 0m3.381s 00:14:10.215 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.215 02:38:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.215 ************************************ 00:14:10.215 END TEST raid_superblock_test 00:14:10.215 ************************************ 00:14:10.215 02:38:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:10.215 02:38:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:10.215 02:38:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:10.215 02:38:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.215 02:38:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.215 ************************************ 00:14:10.215 START TEST raid_read_error_test 00:14:10.215 ************************************ 
00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.v4UGKWrdIE 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=64738 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 64738 /var/tmp/spdk-raid.sock 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@829 -- # '[' -z 64738 ']' 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.215 02:38:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.215 [2024-07-25 02:38:56.992249] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:10.215 [2024-07-25 02:38:56.992584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:11.155 EAL: TSC is not safe to use in SMP mode 00:14:11.155 EAL: TSC is not invariant 00:14:11.155 [2024-07-25 02:38:57.727078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.155 [2024-07-25 02:38:57.816856] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:11.155 [2024-07-25 02:38:57.818491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.155 [2024-07-25 02:38:57.819042] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.155 [2024-07-25 02:38:57.819053] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.155 02:38:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.155 02:38:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:11.155 02:38:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:11.155 02:38:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.415 BaseBdev1_malloc 00:14:11.415 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:11.415 true 00:14:11.415 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:11.675 [2024-07-25 02:38:58.393871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:11.675 [2024-07-25 02:38:58.393913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.675 [2024-07-25 02:38:58.393930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2459f4234780 00:14:11.675 [2024-07-25 02:38:58.393936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.675 [2024-07-25 02:38:58.394230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.675 [2024-07-25 02:38:58.394248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.675 BaseBdev1 00:14:11.675 02:38:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:11.675 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.675 BaseBdev2_malloc 00:14:11.675 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:11.934 true 00:14:11.935 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:12.194 [2024-07-25 02:38:58.917917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:12.194 [2024-07-25 02:38:58.917943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.194 [2024-07-25 02:38:58.917954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2459f4234c80 00:14:12.194 [2024-07-25 02:38:58.917959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.194 [2024-07-25 02:38:58.918208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.194 [2024-07-25 02:38:58.918218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.194 BaseBdev2 00:14:12.194 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:12.194 02:38:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.454 BaseBdev3_malloc 00:14:12.454 02:38:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:12.454 true 00:14:12.454 02:38:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:12.714 [2024-07-25 02:38:59.477964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:12.714 [2024-07-25 02:38:59.477990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.714 [2024-07-25 02:38:59.478001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2459f4235180 00:14:12.714 [2024-07-25 02:38:59.478006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.714 [2024-07-25 02:38:59.478247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.714 [2024-07-25 02:38:59.478256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.714 BaseBdev3 00:14:12.714 02:38:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:12.714 02:38:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.975 BaseBdev4_malloc 00:14:12.975 02:38:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:14:12.975 true 00:14:12.975 02:38:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:13.237 [2024-07-25 02:39:00.022014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:13.237 [2024-07-25 02:39:00.022046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.237 [2024-07-25 02:39:00.022061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2459f4235680 00:14:13.237 [2024-07-25 02:39:00.022067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.237 [2024-07-25 02:39:00.022339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.237 [2024-07-25 02:39:00.022362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:13.237 BaseBdev4 00:14:13.237 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:13.496 [2024-07-25 02:39:00.202038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.496 [2024-07-25 02:39:00.202256] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.496 [2024-07-25 02:39:00.202268] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.496 [2024-07-25 02:39:00.202278] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.496 [2024-07-25 02:39:00.202324] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2459f4235900 00:14:13.496 [2024-07-25 02:39:00.202328] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:13.496 [2024-07-25 02:39:00.202349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2459f42a0e20 00:14:13.496 [2024-07-25 02:39:00.202392] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2459f4235900 00:14:13.496 [2024-07-25 02:39:00.202395] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2459f4235900 00:14:13.496 [2024-07-25 02:39:00.202410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.496 
02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.496 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.756 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.756 "name": "raid_bdev1", 00:14:13.756 "uuid": "0c4c4dfb-4a2f-11ef-9c8e-7947904e2597", 00:14:13.756 "strip_size_kb": 0, 00:14:13.756 "state": "online", 00:14:13.756 "raid_level": "raid1", 00:14:13.756 "superblock": true, 00:14:13.756 "num_base_bdevs": 4, 00:14:13.756 "num_base_bdevs_discovered": 4, 00:14:13.756 "num_base_bdevs_operational": 4, 00:14:13.756 "base_bdevs_list": [ 00:14:13.756 { 00:14:13.756 "name": "BaseBdev1", 00:14:13.756 "uuid": "a388d23b-7215-6956-8319-ded4cc5712d1", 00:14:13.756 "is_configured": true, 00:14:13.756 "data_offset": 2048, 00:14:13.756 "data_size": 63488 00:14:13.756 }, 00:14:13.756 { 00:14:13.756 "name": "BaseBdev2", 00:14:13.756 "uuid": "fdc325d0-0eac-fa5d-a955-454a6de271d4", 00:14:13.756 "is_configured": true, 00:14:13.756 "data_offset": 2048, 00:14:13.756 "data_size": 63488 00:14:13.756 }, 00:14:13.756 { 00:14:13.756 "name": "BaseBdev3", 00:14:13.756 "uuid": "dff48a90-1d4b-1555-b21b-2e1a8660f817", 00:14:13.756 "is_configured": true, 00:14:13.756 "data_offset": 2048, 00:14:13.756 "data_size": 63488 00:14:13.756 }, 00:14:13.756 { 00:14:13.756 "name": "BaseBdev4", 00:14:13.756 "uuid": "e4ae5330-0a69-7050-a63b-c9390d108454", 00:14:13.756 "is_configured": true, 00:14:13.756 "data_offset": 2048, 00:14:13.756 "data_size": 63488 00:14:13.756 } 00:14:13.756 ] 00:14:13.756 }' 00:14:13.756 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.756 02:39:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.020 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:14.020 02:39:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:14.020 [2024-07-25 02:39:00.774168] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2459f42a0ec0 00:14:14.961 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:15.221 02:39:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.221 02:39:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.481 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.481 "name": "raid_bdev1", 00:14:15.481 "uuid": "0c4c4dfb-4a2f-11ef-9c8e-7947904e2597", 00:14:15.481 "strip_size_kb": 0, 00:14:15.481 "state": "online", 00:14:15.481 "raid_level": "raid1", 00:14:15.481 "superblock": true, 00:14:15.481 "num_base_bdevs": 4, 00:14:15.481 "num_base_bdevs_discovered": 4, 00:14:15.482 "num_base_bdevs_operational": 4, 00:14:15.482 "base_bdevs_list": [ 00:14:15.482 { 00:14:15.482 "name": "BaseBdev1", 00:14:15.482 "uuid": "a388d23b-7215-6956-8319-ded4cc5712d1", 00:14:15.482 "is_configured": true, 00:14:15.482 "data_offset": 2048, 00:14:15.482 "data_size": 63488 00:14:15.482 }, 00:14:15.482 { 00:14:15.482 "name": "BaseBdev2", 00:14:15.482 "uuid": "fdc325d0-0eac-fa5d-a955-454a6de271d4", 00:14:15.482 "is_configured": true, 00:14:15.482 "data_offset": 2048, 00:14:15.482 "data_size": 63488 00:14:15.482 }, 00:14:15.482 { 00:14:15.482 "name": "BaseBdev3", 00:14:15.482 "uuid": "dff48a90-1d4b-1555-b21b-2e1a8660f817", 00:14:15.482 "is_configured": true, 00:14:15.482 "data_offset": 2048, 00:14:15.482 "data_size": 63488 00:14:15.482 }, 00:14:15.482 { 00:14:15.482 "name": "BaseBdev4", 00:14:15.482 "uuid": "e4ae5330-0a69-7050-a63b-c9390d108454", 00:14:15.482 "is_configured": true, 00:14:15.482 "data_offset": 2048, 00:14:15.482 "data_size": 63488 00:14:15.482 } 00:14:15.482 ] 00:14:15.482 }' 00:14:15.482 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.482 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:15.742 [2024-07-25 02:39:02.608355] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.742 [2024-07-25 02:39:02.608383] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.742 [2024-07-25 02:39:02.608705] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.742 [2024-07-25 02:39:02.608720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.742 [2024-07-25 02:39:02.608736] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.742 [2024-07-25 02:39:02.608740] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2459f4235900 name raid_bdev1, state offline 00:14:15.742 0 00:14:15.742 02:39:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 64738 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 64738 ']' 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 64738 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 64738 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:15.742 killing process with pid 64738 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64738' 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 64738 00:14:15.742 [2024-07-25 02:39:02.638091] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.742 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 64738 00:14:16.002 [2024-07-25 02:39:02.656705] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.v4UGKWrdIE 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:16.002 00:14:16.002 real 0m5.871s 00:14:16.002 user 0m8.688s 00:14:16.002 sys 0m1.319s 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.002 02:39:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.002 ************************************ 00:14:16.002 END TEST raid_read_error_test 00:14:16.002 ************************************ 00:14:16.002 02:39:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:16.002 02:39:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:16.002 02:39:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:16.002 02:39:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.002 02:39:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.263 ************************************ 00:14:16.263 START TEST raid_write_error_test 00:14:16.263 ************************************ 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:14:16.263 02:39:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8gXrmwcssf 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=64872 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 64872 /var/tmp/spdk-raid.sock 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 64872 ']' 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.263 02:39:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.263 [2024-07-25 02:39:02.941129] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:16.263 [2024-07-25 02:39:02.941500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:16.832 EAL: TSC is not safe to use in SMP mode 00:14:16.832 EAL: TSC is not invariant 00:14:16.832 [2024-07-25 02:39:03.685778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.091 [2024-07-25 02:39:03.777949] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:17.091 [2024-07-25 02:39:03.779653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.091 [2024-07-25 02:39:03.780215] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.091 [2024-07-25 02:39:03.780227] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.091 02:39:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.091 02:39:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:17.091 02:39:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:17.091 02:39:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:17.350 BaseBdev1_malloc 00:14:17.350 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:17.350 true 00:14:17.350 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:17.609 [2024-07-25 02:39:04.367164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:17.609 [2024-07-25 02:39:04.367212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.609 [2024-07-25 02:39:04.367230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bae1ec34780 00:14:17.609 [2024-07-25 02:39:04.367235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.609 [2024-07-25 02:39:04.367532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.609 [2024-07-25 02:39:04.367551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.609 BaseBdev1 00:14:17.609 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:17.609 02:39:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:17.868 BaseBdev2_malloc 00:14:17.868 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:17.868 true 00:14:17.868 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:18.127 [2024-07-25 02:39:04.915204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:18.127 [2024-07-25 02:39:04.915242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.127 [2024-07-25 02:39:04.915261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bae1ec34c80 00:14:18.127 [2024-07-25 02:39:04.915266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.127 [2024-07-25 02:39:04.915693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.127 [2024-07-25 02:39:04.915719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:18.127 BaseBdev2 00:14:18.127 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:18.128 02:39:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:18.387 BaseBdev3_malloc 00:14:18.387 02:39:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:18.387 true 00:14:18.647 02:39:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:18.647 [2024-07-25 02:39:05.463252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:18.647 [2024-07-25 02:39:05.463290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.647 [2024-07-25 02:39:05.463311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bae1ec35180 00:14:18.647 [2024-07-25 02:39:05.463317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.647 [2024-07-25 02:39:05.463722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.647 [2024-07-25 02:39:05.463749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:18.647 BaseBdev3 00:14:18.647 02:39:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:18.647 02:39:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:18.907 BaseBdev4_malloc 00:14:18.907 02:39:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:19.167 true 00:14:19.167 02:39:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:19.167 [2024-07-25 02:39:06.003297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:19.167 [2024-07-25 02:39:06.003335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.167 [2024-07-25 02:39:06.003355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bae1ec35680 00:14:19.167 [2024-07-25 02:39:06.003360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.167 [2024-07-25 02:39:06.003774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.167 [2024-07-25 02:39:06.003801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:19.167 BaseBdev4 00:14:19.167 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:19.426 [2024-07-25 02:39:06.187320] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.426 [2024-07-25 02:39:06.187577] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.426 [2024-07-25 02:39:06.187597] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.426 [2024-07-25 02:39:06.187608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.426 [2024-07-25 02:39:06.187658] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bae1ec35900 00:14:19.426 [2024-07-25 02:39:06.187664] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.426 [2024-07-25 02:39:06.187687] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bae1eca0e20 00:14:19.426 [2024-07-25 02:39:06.187749] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bae1ec35900 00:14:19.426 [2024-07-25 02:39:06.187753] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bae1ec35900 00:14:19.426 [2024-07-25 02:39:06.187770] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.426 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:19.426 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:19.426 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.427 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.686 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.686 "name": "raid_bdev1", 00:14:19.686 "uuid": "0fdd95ff-4a2f-11ef-9c8e-7947904e2597", 00:14:19.686 "strip_size_kb": 0, 00:14:19.686 "state": "online", 00:14:19.686 "raid_level": "raid1", 00:14:19.686 "superblock": true, 00:14:19.686 "num_base_bdevs": 4, 00:14:19.686 "num_base_bdevs_discovered": 4, 00:14:19.686 "num_base_bdevs_operational": 4, 00:14:19.686 "base_bdevs_list": [ 00:14:19.686 { 00:14:19.686 "name": "BaseBdev1", 00:14:19.686 "uuid": "881af502-ad52-2d52-bfed-1360bd130308", 00:14:19.686 "is_configured": true, 00:14:19.686 "data_offset": 2048, 00:14:19.686 "data_size": 63488 00:14:19.686 }, 00:14:19.686 { 00:14:19.686 "name": "BaseBdev2", 00:14:19.686 "uuid": "4c28ad42-794a-5a57-844b-9cb179af20be", 00:14:19.686 "is_configured": true, 00:14:19.686 "data_offset": 2048, 00:14:19.686 "data_size": 63488 00:14:19.686 }, 00:14:19.686 { 00:14:19.686 "name": "BaseBdev3", 00:14:19.686 "uuid": "3ca55f5b-11fe-0a55-a7eb-1a539430928a", 00:14:19.686 "is_configured": true, 00:14:19.686 "data_offset": 2048, 00:14:19.686 "data_size": 63488 00:14:19.686 }, 00:14:19.686 { 00:14:19.686 "name": "BaseBdev4", 00:14:19.686 "uuid": "68355ba2-e623-545e-9e27-45c7c3ac1a7e", 00:14:19.686 "is_configured": true, 00:14:19.686 "data_offset": 2048, 00:14:19.686 "data_size": 63488 00:14:19.686 } 00:14:19.686 ] 00:14:19.686 }' 00:14:19.686 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.686 02:39:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.946 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:19.946 02:39:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:19.946 [2024-07-25 02:39:06.795425] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bae1eca0ec0 00:14:20.886 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:21.146 [2024-07-25 02:39:07.945454] bdev_raid.c:2248:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:21.146 [2024-07-25 02:39:07.945503] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.146 [2024-07-25 02:39:07.945631] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1bae1eca0ec0 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.147 
02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.147 02:39:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.407 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.407 "name": "raid_bdev1", 00:14:21.407 "uuid": "0fdd95ff-4a2f-11ef-9c8e-7947904e2597", 00:14:21.407 "strip_size_kb": 0, 00:14:21.407 "state": "online", 00:14:21.407 "raid_level": "raid1", 00:14:21.407 "superblock": true, 00:14:21.407 "num_base_bdevs": 4, 00:14:21.407 "num_base_bdevs_discovered": 3, 00:14:21.407 "num_base_bdevs_operational": 3, 00:14:21.407 "base_bdevs_list": [ 00:14:21.407 { 00:14:21.407 "name": null, 00:14:21.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.407 "is_configured": false, 00:14:21.407 "data_offset": 2048, 00:14:21.407 "data_size": 63488 00:14:21.407 }, 00:14:21.407 { 00:14:21.407 "name": "BaseBdev2", 00:14:21.407 "uuid": "4c28ad42-794a-5a57-844b-9cb179af20be", 00:14:21.407 "is_configured": true, 00:14:21.407 "data_offset": 2048, 00:14:21.407 "data_size": 63488 00:14:21.407 }, 00:14:21.407 { 00:14:21.407 "name": "BaseBdev3", 00:14:21.407 "uuid": "3ca55f5b-11fe-0a55-a7eb-1a539430928a", 00:14:21.407 "is_configured": true, 00:14:21.407 "data_offset": 2048, 00:14:21.407 "data_size": 63488 00:14:21.407 }, 00:14:21.407 { 00:14:21.407 "name": "BaseBdev4", 00:14:21.407 "uuid": "68355ba2-e623-545e-9e27-45c7c3ac1a7e", 00:14:21.407 "is_configured": true, 00:14:21.407 "data_offset": 2048, 00:14:21.407 "data_size": 63488 00:14:21.407 } 00:14:21.407 ] 00:14:21.407 }' 00:14:21.407 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.407 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.668 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:21.928 [2024-07-25 02:39:08.613194] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.928 [2024-07-25 02:39:08.613222] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.928 [2024-07-25 02:39:08.613483] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.928 [2024-07-25 02:39:08.613491] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
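Before the volume is torn down (the bdev_raid_delete trace just above), the test has already confirmed the point of raid_write_error_test: once bdev_error_inject_error makes writes to EE_BaseBdev1_malloc fail, the raid1 volume drops BaseBdev1 from slot 0 but stays online with three of its four base bdevs, which is what the JSON dump above shows. A rough by-hand version of the same check, with the socket path and bdev names taken from this run:

# make every write to the error bdev sitting under BaseBdev1 fail
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
# ...drive some write I/O against raid_bdev1 (the test uses bdevperf perform_tests)...
# raid1 should have removed the failed leg but still be online
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
# expected output for this scenario: online 3/4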
00:14:21.928 [2024-07-25 02:39:08.613506] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.928 [2024-07-25 02:39:08.613510] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bae1ec35900 name raid_bdev1, state offline 00:14:21.928 0 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 64872 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 64872 ']' 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 64872 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 64872 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:21.928 killing process with pid 64872 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64872' 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 64872 00:14:21.928 [2024-07-25 02:39:08.643255] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.928 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 64872 00:14:21.928 [2024-07-25 02:39:08.661799] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8gXrmwcssf 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:22.189 00:14:22.189 real 0m5.927s 00:14:22.189 user 0m8.714s 00:14:22.189 sys 0m1.393s 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.189 02:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 ************************************ 00:14:22.189 END TEST raid_write_error_test 00:14:22.189 ************************************ 00:14:22.189 02:39:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:22.189 02:39:08 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:14:22.189 02:39:08 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:14:22.189 02:39:08 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:14:22.189 02:39:08 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:22.189 02:39:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:22.189 02:39:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.189 02:39:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 ************************************ 00:14:22.189 START TEST raid_state_function_test_sb_4k 00:14:22.189 ************************************ 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65000 00:14:22.189 Process raid pid: 65000 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65000' 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65000 /var/tmp/spdk-raid.sock 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65000 ']' 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.189 02:39:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 [2024-07-25 02:39:08.933152] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:22.189 [2024-07-25 02:39:08.933437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:22.756 EAL: TSC is not safe to use in SMP mode 00:14:22.756 EAL: TSC is not invariant 00:14:22.756 [2024-07-25 02:39:09.372464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.756 [2024-07-25 02:39:09.451110] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:22.756 [2024-07-25 02:39:09.452715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.756 [2024-07-25 02:39:09.453280] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.756 [2024-07-25 02:39:09.453291] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.015 02:39:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.015 02:39:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:14:23.015 02:39:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.274 [2024-07-25 02:39:10.004146] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.274 [2024-07-25 02:39:10.004180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.274 [2024-07-25 02:39:10.004184] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.274 [2024-07-25 02:39:10.004190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
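raid_state_function_test does not reuse the bdevperf application; it starts a bare bdev_svc app with its RPC socket on /var/tmp/spdk-raid.sock and bdev_raid debug logging enabled, waits for the socket to answer, and only then begins issuing RPCs (the EAL notices and waitforlisten lines above). A rough equivalent of that startup, assuming $SPDK points at the repository root ($SPDK is illustrative; this run uses /home/vagrant/spdk_repo/spdk):

# start a minimal bdev application with raid debug traces on a private RPC socket
$SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# waitforlisten from autotest_common.sh polls the socket; a plain loop over a
# harmless RPC is an equivalent stand-in
until $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs >/dev/null 2>&1; do
    sleep 0.1
done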
00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.274 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.533 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.533 "name": "Existed_Raid", 00:14:23.533 "uuid": "1223fcc9-4a2f-11ef-9c8e-7947904e2597", 00:14:23.533 "strip_size_kb": 0, 00:14:23.533 "state": "configuring", 00:14:23.533 "raid_level": "raid1", 00:14:23.533 "superblock": true, 00:14:23.533 "num_base_bdevs": 2, 00:14:23.533 "num_base_bdevs_discovered": 0, 00:14:23.533 "num_base_bdevs_operational": 2, 00:14:23.533 "base_bdevs_list": [ 00:14:23.533 { 00:14:23.533 "name": "BaseBdev1", 00:14:23.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.533 "is_configured": false, 00:14:23.533 "data_offset": 0, 00:14:23.533 "data_size": 0 00:14:23.533 }, 00:14:23.533 { 00:14:23.533 "name": "BaseBdev2", 00:14:23.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.533 "is_configured": false, 00:14:23.533 "data_offset": 0, 00:14:23.533 "data_size": 0 00:14:23.533 } 00:14:23.533 ] 00:14:23.533 }' 00:14:23.533 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.533 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:23.793 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.793 [2024-07-25 02:39:10.628177] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.793 [2024-07-25 02:39:10.628193] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x87c1ec34500 name Existed_Raid, state configuring 00:14:23.793 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:24.052 [2024-07-25 02:39:10.820199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.052 [2024-07-25 02:39:10.820223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.052 [2024-07-25 02:39:10.820226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.052 [2024-07-25 02:39:10.820232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.052 02:39:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:14:24.312 [2024-07-25 02:39:11.040991] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.312 BaseBdev1 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:24.312 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.572 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.572 [ 00:14:24.572 { 00:14:24.572 "name": "BaseBdev1", 00:14:24.572 "aliases": [ 00:14:24.572 "12c214cd-4a2f-11ef-9c8e-7947904e2597" 00:14:24.572 ], 00:14:24.572 "product_name": "Malloc disk", 00:14:24.572 "block_size": 4096, 00:14:24.572 "num_blocks": 8192, 00:14:24.572 "uuid": "12c214cd-4a2f-11ef-9c8e-7947904e2597", 00:14:24.572 "assigned_rate_limits": { 00:14:24.572 "rw_ios_per_sec": 0, 00:14:24.572 "rw_mbytes_per_sec": 0, 00:14:24.572 "r_mbytes_per_sec": 0, 00:14:24.572 "w_mbytes_per_sec": 0 00:14:24.572 }, 00:14:24.572 "claimed": true, 00:14:24.572 "claim_type": "exclusive_write", 00:14:24.572 "zoned": false, 00:14:24.572 "supported_io_types": { 00:14:24.572 "read": true, 00:14:24.572 "write": true, 00:14:24.572 "unmap": true, 00:14:24.572 "flush": true, 00:14:24.572 "reset": true, 00:14:24.573 "nvme_admin": false, 00:14:24.573 "nvme_io": false, 00:14:24.573 "nvme_io_md": false, 00:14:24.573 "write_zeroes": true, 00:14:24.573 "zcopy": true, 00:14:24.573 "get_zone_info": false, 00:14:24.573 "zone_management": false, 00:14:24.573 "zone_append": false, 00:14:24.573 "compare": false, 00:14:24.573 "compare_and_write": false, 00:14:24.573 "abort": true, 00:14:24.573 "seek_hole": false, 00:14:24.573 "seek_data": false, 00:14:24.573 "copy": true, 00:14:24.573 "nvme_iov_md": false 00:14:24.573 }, 00:14:24.573 "memory_domains": [ 00:14:24.573 { 00:14:24.573 "dma_device_id": "system", 00:14:24.573 "dma_device_type": 1 00:14:24.573 }, 00:14:24.573 { 00:14:24.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.573 "dma_device_type": 2 00:14:24.573 } 00:14:24.573 ], 00:14:24.573 "driver_specific": {} 00:14:24.573 } 00:14:24.573 ] 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.573 
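The ordering above is what the "configuring" checks are about: bdev_raid_create -s registers Existed_Raid even though neither BaseBdev1 nor BaseBdev2 exists yet (the "doesn't exist now" notices), and the raid only leaves the configuring state once every base bdev has been created and claimed. A condensed sketch of that ordering, with names as in the log; the 32/4096 arguments match the malloc bdev dumped above (8192 blocks of 4096 bytes):

# create the raid first; with no base bdevs present it stays in state "configuring"
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
    -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# create the first base bdev; the raid claims it on examine but is still incomplete
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
# expected at this point: configuring 1/2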
02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.573 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.833 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.833 "name": "Existed_Raid", 00:14:24.833 "uuid": "12a081ee-4a2f-11ef-9c8e-7947904e2597", 00:14:24.833 "strip_size_kb": 0, 00:14:24.833 "state": "configuring", 00:14:24.833 "raid_level": "raid1", 00:14:24.833 "superblock": true, 00:14:24.833 "num_base_bdevs": 2, 00:14:24.833 "num_base_bdevs_discovered": 1, 00:14:24.833 "num_base_bdevs_operational": 2, 00:14:24.833 "base_bdevs_list": [ 00:14:24.833 { 00:14:24.833 "name": "BaseBdev1", 00:14:24.833 "uuid": "12c214cd-4a2f-11ef-9c8e-7947904e2597", 00:14:24.833 "is_configured": true, 00:14:24.833 "data_offset": 256, 00:14:24.833 "data_size": 7936 00:14:24.833 }, 00:14:24.833 { 00:14:24.833 "name": "BaseBdev2", 00:14:24.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.833 "is_configured": false, 00:14:24.833 "data_offset": 0, 00:14:24.833 "data_size": 0 00:14:24.833 } 00:14:24.833 ] 00:14:24.833 }' 00:14:24.833 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.833 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.092 02:39:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:25.406 [2024-07-25 02:39:12.064303] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:25.406 [2024-07-25 02:39:12.064324] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x87c1ec34500 name Existed_Raid, state configuring 00:14:25.406 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:25.406 [2024-07-25 02:39:12.240325] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.406 [2024-07-25 02:39:12.240957] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.406 [2024-07-25 02:39:12.240987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.406 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:25.406 02:39:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:25.406 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:25.406 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.406 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.407 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.666 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.666 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.666 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.666 "name": "Existed_Raid", 00:14:25.666 "uuid": "13793387-4a2f-11ef-9c8e-7947904e2597", 00:14:25.666 "strip_size_kb": 0, 00:14:25.666 "state": "configuring", 00:14:25.666 "raid_level": "raid1", 00:14:25.666 "superblock": true, 00:14:25.666 "num_base_bdevs": 2, 00:14:25.666 "num_base_bdevs_discovered": 1, 00:14:25.666 "num_base_bdevs_operational": 2, 00:14:25.666 "base_bdevs_list": [ 00:14:25.666 { 00:14:25.666 "name": "BaseBdev1", 00:14:25.666 "uuid": "12c214cd-4a2f-11ef-9c8e-7947904e2597", 00:14:25.666 "is_configured": true, 00:14:25.666 "data_offset": 256, 00:14:25.666 "data_size": 7936 00:14:25.666 }, 00:14:25.666 { 00:14:25.666 "name": "BaseBdev2", 00:14:25.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.666 "is_configured": false, 00:14:25.666 "data_offset": 0, 00:14:25.666 "data_size": 0 00:14:25.666 } 00:14:25.666 ] 00:14:25.666 }' 00:14:25.666 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.666 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:25.926 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:14:26.186 [2024-07-25 02:39:12.908472] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.186 [2024-07-25 02:39:12.908513] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x87c1ec34a00 00:14:26.186 [2024-07-25 02:39:12.908517] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:26.186 [2024-07-25 02:39:12.908532] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x87c1ec97e20 00:14:26.186 
[2024-07-25 02:39:12.908561] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x87c1ec34a00 00:14:26.186 [2024-07-25 02:39:12.908564] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x87c1ec34a00 00:14:26.186 [2024-07-25 02:39:12.908577] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.186 BaseBdev2 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:26.186 02:39:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.446 [ 00:14:26.446 { 00:14:26.446 "name": "BaseBdev2", 00:14:26.446 "aliases": [ 00:14:26.446 "13df23d3-4a2f-11ef-9c8e-7947904e2597" 00:14:26.446 ], 00:14:26.446 "product_name": "Malloc disk", 00:14:26.446 "block_size": 4096, 00:14:26.446 "num_blocks": 8192, 00:14:26.446 "uuid": "13df23d3-4a2f-11ef-9c8e-7947904e2597", 00:14:26.446 "assigned_rate_limits": { 00:14:26.446 "rw_ios_per_sec": 0, 00:14:26.446 "rw_mbytes_per_sec": 0, 00:14:26.446 "r_mbytes_per_sec": 0, 00:14:26.446 "w_mbytes_per_sec": 0 00:14:26.446 }, 00:14:26.446 "claimed": true, 00:14:26.446 "claim_type": "exclusive_write", 00:14:26.446 "zoned": false, 00:14:26.446 "supported_io_types": { 00:14:26.446 "read": true, 00:14:26.446 "write": true, 00:14:26.446 "unmap": true, 00:14:26.446 "flush": true, 00:14:26.446 "reset": true, 00:14:26.446 "nvme_admin": false, 00:14:26.446 "nvme_io": false, 00:14:26.446 "nvme_io_md": false, 00:14:26.446 "write_zeroes": true, 00:14:26.446 "zcopy": true, 00:14:26.446 "get_zone_info": false, 00:14:26.446 "zone_management": false, 00:14:26.446 "zone_append": false, 00:14:26.446 "compare": false, 00:14:26.446 "compare_and_write": false, 00:14:26.446 "abort": true, 00:14:26.446 "seek_hole": false, 00:14:26.446 "seek_data": false, 00:14:26.446 "copy": true, 00:14:26.446 "nvme_iov_md": false 00:14:26.446 }, 00:14:26.446 "memory_domains": [ 00:14:26.446 { 00:14:26.446 "dma_device_id": "system", 00:14:26.446 "dma_device_type": 1 00:14:26.446 }, 00:14:26.446 { 00:14:26.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.446 "dma_device_type": 2 00:14:26.446 } 00:14:26.446 ], 00:14:26.446 "driver_specific": {} 00:14:26.446 } 00:14:26.446 ] 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:26.446 02:39:13 
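After each bdev_malloc_create the waitforbdev helper is what keeps the test deterministic: it first drains the examine callbacks with bdev_wait_for_examine, so the raid claim has actually happened, and then asks for the specific bdev with a 2000 ms timeout; the JSON array that follows is the output of that bdev_get_bdevs call for BaseBdev2. The two RPCs on their own:

# let all pending examine callbacks finish so bdev claims are settled
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
# then wait up to 2 seconds for the named bdev to appear and dump its descriptor
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000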
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.446 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.706 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.706 "name": "Existed_Raid", 00:14:26.706 "uuid": "13793387-4a2f-11ef-9c8e-7947904e2597", 00:14:26.706 "strip_size_kb": 0, 00:14:26.706 "state": "online", 00:14:26.706 "raid_level": "raid1", 00:14:26.706 "superblock": true, 00:14:26.706 "num_base_bdevs": 2, 00:14:26.706 "num_base_bdevs_discovered": 2, 00:14:26.706 "num_base_bdevs_operational": 2, 00:14:26.706 "base_bdevs_list": [ 00:14:26.706 { 00:14:26.706 "name": "BaseBdev1", 00:14:26.706 "uuid": "12c214cd-4a2f-11ef-9c8e-7947904e2597", 00:14:26.706 "is_configured": true, 00:14:26.706 "data_offset": 256, 00:14:26.706 "data_size": 7936 00:14:26.706 }, 00:14:26.706 { 00:14:26.706 "name": "BaseBdev2", 00:14:26.706 "uuid": "13df23d3-4a2f-11ef-9c8e-7947904e2597", 00:14:26.706 "is_configured": true, 00:14:26.706 "data_offset": 256, 00:14:26.706 "data_size": 7936 00:14:26.706 } 00:14:26.706 ] 00:14:26.706 }' 00:14:26.706 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.706 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:26.966 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:27.226 [2024-07-25 02:39:13.920503] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.226 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:27.226 "name": "Existed_Raid", 00:14:27.226 "aliases": [ 00:14:27.226 "13793387-4a2f-11ef-9c8e-7947904e2597" 00:14:27.226 ], 00:14:27.226 "product_name": "Raid Volume", 00:14:27.226 "block_size": 4096, 00:14:27.226 "num_blocks": 7936, 00:14:27.226 "uuid": "13793387-4a2f-11ef-9c8e-7947904e2597", 00:14:27.226 "assigned_rate_limits": { 00:14:27.226 "rw_ios_per_sec": 0, 00:14:27.226 "rw_mbytes_per_sec": 0, 00:14:27.226 "r_mbytes_per_sec": 0, 00:14:27.226 "w_mbytes_per_sec": 0 00:14:27.226 }, 00:14:27.226 "claimed": false, 00:14:27.226 "zoned": false, 00:14:27.226 "supported_io_types": { 00:14:27.226 "read": true, 00:14:27.226 "write": true, 00:14:27.226 "unmap": false, 00:14:27.226 "flush": false, 00:14:27.226 "reset": true, 00:14:27.226 "nvme_admin": false, 00:14:27.226 "nvme_io": false, 00:14:27.226 "nvme_io_md": false, 00:14:27.226 "write_zeroes": true, 00:14:27.226 "zcopy": false, 00:14:27.226 "get_zone_info": false, 00:14:27.226 "zone_management": false, 00:14:27.226 "zone_append": false, 00:14:27.226 "compare": false, 00:14:27.226 "compare_and_write": false, 00:14:27.226 "abort": false, 00:14:27.226 "seek_hole": false, 00:14:27.226 "seek_data": false, 00:14:27.226 "copy": false, 00:14:27.226 "nvme_iov_md": false 00:14:27.226 }, 00:14:27.226 "memory_domains": [ 00:14:27.226 { 00:14:27.226 "dma_device_id": "system", 00:14:27.226 "dma_device_type": 1 00:14:27.226 }, 00:14:27.226 { 00:14:27.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.226 "dma_device_type": 2 00:14:27.226 }, 00:14:27.226 { 00:14:27.226 "dma_device_id": "system", 00:14:27.226 "dma_device_type": 1 00:14:27.226 }, 00:14:27.226 { 00:14:27.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.226 "dma_device_type": 2 00:14:27.226 } 00:14:27.226 ], 00:14:27.226 "driver_specific": { 00:14:27.226 "raid": { 00:14:27.226 "uuid": "13793387-4a2f-11ef-9c8e-7947904e2597", 00:14:27.226 "strip_size_kb": 0, 00:14:27.226 "state": "online", 00:14:27.226 "raid_level": "raid1", 00:14:27.226 "superblock": true, 00:14:27.226 "num_base_bdevs": 2, 00:14:27.226 "num_base_bdevs_discovered": 2, 00:14:27.226 "num_base_bdevs_operational": 2, 00:14:27.226 "base_bdevs_list": [ 00:14:27.226 { 00:14:27.226 "name": "BaseBdev1", 00:14:27.226 "uuid": "12c214cd-4a2f-11ef-9c8e-7947904e2597", 00:14:27.226 "is_configured": true, 00:14:27.226 "data_offset": 256, 00:14:27.226 "data_size": 7936 00:14:27.226 }, 00:14:27.226 { 00:14:27.226 "name": "BaseBdev2", 00:14:27.226 "uuid": "13df23d3-4a2f-11ef-9c8e-7947904e2597", 00:14:27.226 "is_configured": true, 00:14:27.226 "data_offset": 256, 00:14:27.226 "data_size": 7936 00:14:27.226 } 00:14:27.226 ] 00:14:27.226 } 00:14:27.226 } 00:14:27.226 }' 00:14:27.226 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.226 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:27.226 BaseBdev2' 00:14:27.226 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.226 02:39:13 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:27.226 02:39:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.486 "name": "BaseBdev1", 00:14:27.486 "aliases": [ 00:14:27.486 "12c214cd-4a2f-11ef-9c8e-7947904e2597" 00:14:27.486 ], 00:14:27.486 "product_name": "Malloc disk", 00:14:27.486 "block_size": 4096, 00:14:27.486 "num_blocks": 8192, 00:14:27.486 "uuid": "12c214cd-4a2f-11ef-9c8e-7947904e2597", 00:14:27.486 "assigned_rate_limits": { 00:14:27.486 "rw_ios_per_sec": 0, 00:14:27.486 "rw_mbytes_per_sec": 0, 00:14:27.486 "r_mbytes_per_sec": 0, 00:14:27.486 "w_mbytes_per_sec": 0 00:14:27.486 }, 00:14:27.486 "claimed": true, 00:14:27.486 "claim_type": "exclusive_write", 00:14:27.486 "zoned": false, 00:14:27.486 "supported_io_types": { 00:14:27.486 "read": true, 00:14:27.486 "write": true, 00:14:27.486 "unmap": true, 00:14:27.486 "flush": true, 00:14:27.486 "reset": true, 00:14:27.486 "nvme_admin": false, 00:14:27.486 "nvme_io": false, 00:14:27.486 "nvme_io_md": false, 00:14:27.486 "write_zeroes": true, 00:14:27.486 "zcopy": true, 00:14:27.486 "get_zone_info": false, 00:14:27.486 "zone_management": false, 00:14:27.486 "zone_append": false, 00:14:27.486 "compare": false, 00:14:27.486 "compare_and_write": false, 00:14:27.486 "abort": true, 00:14:27.486 "seek_hole": false, 00:14:27.486 "seek_data": false, 00:14:27.486 "copy": true, 00:14:27.486 "nvme_iov_md": false 00:14:27.486 }, 00:14:27.486 "memory_domains": [ 00:14:27.486 { 00:14:27.486 "dma_device_id": "system", 00:14:27.486 "dma_device_type": 1 00:14:27.486 }, 00:14:27.486 { 00:14:27.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.486 "dma_device_type": 2 00:14:27.486 } 00:14:27.486 ], 00:14:27.486 "driver_specific": {} 00:14:27.486 }' 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:27.486 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.747 "name": "BaseBdev2", 00:14:27.747 "aliases": [ 00:14:27.747 "13df23d3-4a2f-11ef-9c8e-7947904e2597" 00:14:27.747 ], 00:14:27.747 "product_name": "Malloc disk", 00:14:27.747 "block_size": 4096, 00:14:27.747 "num_blocks": 8192, 00:14:27.747 "uuid": "13df23d3-4a2f-11ef-9c8e-7947904e2597", 00:14:27.747 "assigned_rate_limits": { 00:14:27.747 "rw_ios_per_sec": 0, 00:14:27.747 "rw_mbytes_per_sec": 0, 00:14:27.747 "r_mbytes_per_sec": 0, 00:14:27.747 "w_mbytes_per_sec": 0 00:14:27.747 }, 00:14:27.747 "claimed": true, 00:14:27.747 "claim_type": "exclusive_write", 00:14:27.747 "zoned": false, 00:14:27.747 "supported_io_types": { 00:14:27.747 "read": true, 00:14:27.747 "write": true, 00:14:27.747 "unmap": true, 00:14:27.747 "flush": true, 00:14:27.747 "reset": true, 00:14:27.747 "nvme_admin": false, 00:14:27.747 "nvme_io": false, 00:14:27.747 "nvme_io_md": false, 00:14:27.747 "write_zeroes": true, 00:14:27.747 "zcopy": true, 00:14:27.747 "get_zone_info": false, 00:14:27.747 "zone_management": false, 00:14:27.747 "zone_append": false, 00:14:27.747 "compare": false, 00:14:27.747 "compare_and_write": false, 00:14:27.747 "abort": true, 00:14:27.747 "seek_hole": false, 00:14:27.747 "seek_data": false, 00:14:27.747 "copy": true, 00:14:27.747 "nvme_iov_md": false 00:14:27.747 }, 00:14:27.747 "memory_domains": [ 00:14:27.747 { 00:14:27.747 "dma_device_id": "system", 00:14:27.747 "dma_device_type": 1 00:14:27.747 }, 00:14:27.747 { 00:14:27.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.747 "dma_device_type": 2 00:14:27.747 } 00:14:27.747 ], 00:14:27.747 "driver_specific": {} 00:14:27.747 }' 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.747 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:28.009 [2024-07-25 02:39:14.696572] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.009 
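Deleting BaseBdev1 out from under the online raid (the bdev_malloc_delete just above) is the actual state check for raid1: because the level has redundancy, the volume is expected to stay online after losing one of its two legs, with num_base_bdevs_discovered dropping to 1; a level without redundancy would not survive the removal. By hand, with the names from this run:

# remove one leg of the online two-way raid1
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
# raid1 keeps redundancy, so the volume should still report "online" with 1 of 2 legs
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'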
02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.009 "name": "Existed_Raid", 00:14:28.009 "uuid": "13793387-4a2f-11ef-9c8e-7947904e2597", 00:14:28.009 "strip_size_kb": 0, 00:14:28.009 "state": "online", 00:14:28.009 "raid_level": "raid1", 00:14:28.009 "superblock": true, 00:14:28.009 "num_base_bdevs": 2, 00:14:28.009 "num_base_bdevs_discovered": 1, 00:14:28.009 "num_base_bdevs_operational": 1, 00:14:28.009 "base_bdevs_list": [ 00:14:28.009 { 00:14:28.009 "name": null, 00:14:28.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.009 "is_configured": false, 00:14:28.009 "data_offset": 256, 00:14:28.009 "data_size": 7936 00:14:28.009 }, 00:14:28.009 { 00:14:28.009 "name": "BaseBdev2", 00:14:28.009 "uuid": "13df23d3-4a2f-11ef-9c8e-7947904e2597", 00:14:28.009 "is_configured": true, 00:14:28.009 "data_offset": 256, 00:14:28.009 "data_size": 7936 00:14:28.009 } 00:14:28.009 ] 00:14:28.009 }' 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.009 02:39:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:28.579 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:28.839 [2024-07-25 02:39:15.545413] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.839 [2024-07-25 02:39:15.545442] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.839 [2024-07-25 02:39:15.550212] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.839 [2024-07-25 02:39:15.550224] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.839 [2024-07-25 02:39:15.550228] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x87c1ec34a00 name Existed_Raid, state offline 00:14:28.839 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:28.839 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:28.839 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.839 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65000 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65000 ']' 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65000 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65000 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:29.099 killing process with pid 65000 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65000' 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65000 
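killprocess above is the FreeBSD-aware teardown helper from autotest_common.sh: it verifies the pid is still alive with kill -0, resolves the process name with ps -c -o command (the FreeBSD form, chosen because uname is not Linux here), checks it is not about to signal a sudo wrapper, and only then kills and reaps the target. A simplified sketch of the same idea, using the pid from this run and assuming the process was started by this same shell so wait can reap it:

pid=65000                      # bdev_svc started for this test
if kill -0 "$pid" 2>/dev/null; then
    # FreeBSD ps: the last line of 'ps -c -o command <pid>' is the bare command name
    name=$(ps -c -o command "$pid" | tail -1)
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                # reap the child so the next test starts clean
fi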
00:14:29.099 [2024-07-25 02:39:15.773792] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.099 [2024-07-25 02:39:15.773815] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65000 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:14:29.099 00:14:29.099 real 0m7.026s 00:14:29.099 user 0m11.854s 00:14:29.099 sys 0m1.527s 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.099 02:39:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.099 ************************************ 00:14:29.099 END TEST raid_state_function_test_sb_4k 00:14:29.099 ************************************ 00:14:29.100 02:39:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:29.100 02:39:15 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:29.100 02:39:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:29.100 02:39:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.100 02:39:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.360 ************************************ 00:14:29.360 START TEST raid_superblock_test_4k 00:14:29.360 ************************************ 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65266 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65266 /var/tmp/spdk-raid.sock 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 65266 ']' 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.360 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:29.360 [2024-07-25 02:39:16.026086] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:29.360 [2024-07-25 02:39:16.026390] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:29.931 EAL: TSC is not safe to use in SMP mode 00:14:29.931 EAL: TSC is not invariant 00:14:29.931 [2024-07-25 02:39:16.766158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.191 [2024-07-25 02:39:16.856539] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:30.191 [2024-07-25 02:39:16.858184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.191 [2024-07-25 02:39:16.858749] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.191 [2024-07-25 02:39:16.858760] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:30.191 02:39:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:14:30.452 malloc1 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:30.452 
[2024-07-25 02:39:17.265609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:30.452 [2024-07-25 02:39:17.265653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.452 [2024-07-25 02:39:17.265661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106877634780 00:14:30.452 [2024-07-25 02:39:17.265666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.452 [2024-07-25 02:39:17.266314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.452 [2024-07-25 02:39:17.266342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:30.452 pt1 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:30.452 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:14:30.712 malloc2 00:14:30.712 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.973 [2024-07-25 02:39:17.625647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.973 [2024-07-25 02:39:17.625687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.973 [2024-07-25 02:39:17.625695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106877634c80 00:14:30.973 [2024-07-25 02:39:17.625701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.973 [2024-07-25 02:39:17.626119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.973 [2024-07-25 02:39:17.626146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.973 pt2 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:30.973 [2024-07-25 02:39:17.797661] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:30.973 [2024-07-25 02:39:17.798048] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.973 [2024-07-25 02:39:17.798103] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x106877634f00 00:14:30.973 [2024-07-25 02:39:17.798108] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:30.973 [2024-07-25 02:39:17.798137] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x106877697e20 00:14:30.973 [2024-07-25 02:39:17.798186] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x106877634f00 00:14:30.973 [2024-07-25 02:39:17.798189] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x106877634f00 00:14:30.973 [2024-07-25 02:39:17.798206] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.973 02:39:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.233 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.233 "name": "raid_bdev1", 00:14:31.233 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:31.233 "strip_size_kb": 0, 00:14:31.233 "state": "online", 00:14:31.233 "raid_level": "raid1", 00:14:31.233 "superblock": true, 00:14:31.233 "num_base_bdevs": 2, 00:14:31.233 "num_base_bdevs_discovered": 2, 00:14:31.233 "num_base_bdevs_operational": 2, 00:14:31.233 "base_bdevs_list": [ 00:14:31.233 { 00:14:31.233 "name": "pt1", 00:14:31.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.233 "is_configured": true, 00:14:31.233 "data_offset": 256, 00:14:31.233 "data_size": 7936 00:14:31.233 }, 00:14:31.233 { 00:14:31.233 "name": "pt2", 00:14:31.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.233 "is_configured": true, 00:14:31.233 "data_offset": 256, 00:14:31.233 "data_size": 7936 00:14:31.233 } 00:14:31.233 ] 00:14:31.233 }' 00:14:31.233 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.233 02:39:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=raid_bdev1 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:31.494 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:31.754 [2024-07-25 02:39:18.477734] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.754 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:31.754 "name": "raid_bdev1", 00:14:31.754 "aliases": [ 00:14:31.754 "16c92ef9-4a2f-11ef-9c8e-7947904e2597" 00:14:31.754 ], 00:14:31.754 "product_name": "Raid Volume", 00:14:31.754 "block_size": 4096, 00:14:31.754 "num_blocks": 7936, 00:14:31.754 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:31.754 "assigned_rate_limits": { 00:14:31.754 "rw_ios_per_sec": 0, 00:14:31.754 "rw_mbytes_per_sec": 0, 00:14:31.754 "r_mbytes_per_sec": 0, 00:14:31.754 "w_mbytes_per_sec": 0 00:14:31.754 }, 00:14:31.754 "claimed": false, 00:14:31.754 "zoned": false, 00:14:31.754 "supported_io_types": { 00:14:31.754 "read": true, 00:14:31.754 "write": true, 00:14:31.754 "unmap": false, 00:14:31.754 "flush": false, 00:14:31.754 "reset": true, 00:14:31.754 "nvme_admin": false, 00:14:31.754 "nvme_io": false, 00:14:31.754 "nvme_io_md": false, 00:14:31.754 "write_zeroes": true, 00:14:31.754 "zcopy": false, 00:14:31.754 "get_zone_info": false, 00:14:31.754 "zone_management": false, 00:14:31.754 "zone_append": false, 00:14:31.755 "compare": false, 00:14:31.755 "compare_and_write": false, 00:14:31.755 "abort": false, 00:14:31.755 "seek_hole": false, 00:14:31.755 "seek_data": false, 00:14:31.755 "copy": false, 00:14:31.755 "nvme_iov_md": false 00:14:31.755 }, 00:14:31.755 "memory_domains": [ 00:14:31.755 { 00:14:31.755 "dma_device_id": "system", 00:14:31.755 "dma_device_type": 1 00:14:31.755 }, 00:14:31.755 { 00:14:31.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.755 "dma_device_type": 2 00:14:31.755 }, 00:14:31.755 { 00:14:31.755 "dma_device_id": "system", 00:14:31.755 "dma_device_type": 1 00:14:31.755 }, 00:14:31.755 { 00:14:31.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.755 "dma_device_type": 2 00:14:31.755 } 00:14:31.755 ], 00:14:31.755 "driver_specific": { 00:14:31.755 "raid": { 00:14:31.755 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:31.755 "strip_size_kb": 0, 00:14:31.755 "state": "online", 00:14:31.755 "raid_level": "raid1", 00:14:31.755 "superblock": true, 00:14:31.755 "num_base_bdevs": 2, 00:14:31.755 "num_base_bdevs_discovered": 2, 00:14:31.755 "num_base_bdevs_operational": 2, 00:14:31.755 "base_bdevs_list": [ 00:14:31.755 { 00:14:31.755 "name": "pt1", 00:14:31.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.755 "is_configured": true, 00:14:31.755 "data_offset": 256, 00:14:31.755 "data_size": 7936 00:14:31.755 }, 00:14:31.755 { 00:14:31.755 "name": "pt2", 00:14:31.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.755 "is_configured": true, 00:14:31.755 "data_offset": 256, 00:14:31.755 "data_size": 7936 
00:14:31.755 } 00:14:31.755 ] 00:14:31.755 } 00:14:31.755 } 00:14:31.755 }' 00:14:31.755 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.755 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:31.755 pt2' 00:14:31.755 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.755 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.755 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:32.014 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:32.014 "name": "pt1", 00:14:32.014 "aliases": [ 00:14:32.014 "00000000-0000-0000-0000-000000000001" 00:14:32.014 ], 00:14:32.014 "product_name": "passthru", 00:14:32.014 "block_size": 4096, 00:14:32.014 "num_blocks": 8192, 00:14:32.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.014 "assigned_rate_limits": { 00:14:32.014 "rw_ios_per_sec": 0, 00:14:32.014 "rw_mbytes_per_sec": 0, 00:14:32.014 "r_mbytes_per_sec": 0, 00:14:32.014 "w_mbytes_per_sec": 0 00:14:32.014 }, 00:14:32.014 "claimed": true, 00:14:32.014 "claim_type": "exclusive_write", 00:14:32.014 "zoned": false, 00:14:32.014 "supported_io_types": { 00:14:32.014 "read": true, 00:14:32.014 "write": true, 00:14:32.014 "unmap": true, 00:14:32.014 "flush": true, 00:14:32.014 "reset": true, 00:14:32.014 "nvme_admin": false, 00:14:32.014 "nvme_io": false, 00:14:32.014 "nvme_io_md": false, 00:14:32.014 "write_zeroes": true, 00:14:32.014 "zcopy": true, 00:14:32.014 "get_zone_info": false, 00:14:32.014 "zone_management": false, 00:14:32.014 "zone_append": false, 00:14:32.014 "compare": false, 00:14:32.014 "compare_and_write": false, 00:14:32.014 "abort": true, 00:14:32.014 "seek_hole": false, 00:14:32.014 "seek_data": false, 00:14:32.014 "copy": true, 00:14:32.014 "nvme_iov_md": false 00:14:32.014 }, 00:14:32.014 "memory_domains": [ 00:14:32.014 { 00:14:32.015 "dma_device_id": "system", 00:14:32.015 "dma_device_type": 1 00:14:32.015 }, 00:14:32.015 { 00:14:32.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.015 "dma_device_type": 2 00:14:32.015 } 00:14:32.015 ], 00:14:32.015 "driver_specific": { 00:14:32.015 "passthru": { 00:14:32.015 "name": "pt1", 00:14:32.015 "base_bdev_name": "malloc1" 00:14:32.015 } 00:14:32.015 } 00:14:32.015 }' 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.015 02:39:18 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:32.015 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:32.274 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:32.274 "name": "pt2", 00:14:32.274 "aliases": [ 00:14:32.274 "00000000-0000-0000-0000-000000000002" 00:14:32.274 ], 00:14:32.274 "product_name": "passthru", 00:14:32.274 "block_size": 4096, 00:14:32.274 "num_blocks": 8192, 00:14:32.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.274 "assigned_rate_limits": { 00:14:32.274 "rw_ios_per_sec": 0, 00:14:32.274 "rw_mbytes_per_sec": 0, 00:14:32.274 "r_mbytes_per_sec": 0, 00:14:32.274 "w_mbytes_per_sec": 0 00:14:32.274 }, 00:14:32.275 "claimed": true, 00:14:32.275 "claim_type": "exclusive_write", 00:14:32.275 "zoned": false, 00:14:32.275 "supported_io_types": { 00:14:32.275 "read": true, 00:14:32.275 "write": true, 00:14:32.275 "unmap": true, 00:14:32.275 "flush": true, 00:14:32.275 "reset": true, 00:14:32.275 "nvme_admin": false, 00:14:32.275 "nvme_io": false, 00:14:32.275 "nvme_io_md": false, 00:14:32.275 "write_zeroes": true, 00:14:32.275 "zcopy": true, 00:14:32.275 "get_zone_info": false, 00:14:32.275 "zone_management": false, 00:14:32.275 "zone_append": false, 00:14:32.275 "compare": false, 00:14:32.275 "compare_and_write": false, 00:14:32.275 "abort": true, 00:14:32.275 "seek_hole": false, 00:14:32.275 "seek_data": false, 00:14:32.275 "copy": true, 00:14:32.275 "nvme_iov_md": false 00:14:32.275 }, 00:14:32.275 "memory_domains": [ 00:14:32.275 { 00:14:32.275 "dma_device_id": "system", 00:14:32.275 "dma_device_type": 1 00:14:32.275 }, 00:14:32.275 { 00:14:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.275 "dma_device_type": 2 00:14:32.275 } 00:14:32.275 ], 00:14:32.275 "driver_specific": { 00:14:32.275 "passthru": { 00:14:32.275 "name": "pt2", 00:14:32.275 "base_bdev_name": "malloc2" 00:14:32.275 } 00:14:32.275 } 00:14:32.275 }' 00:14:32.275 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.275 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.275 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:32.275 02:39:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.275 02:39:19 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:32.275 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:32.534 [2024-07-25 02:39:19.225792] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.534 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=16c92ef9-4a2f-11ef-9c8e-7947904e2597 00:14:32.534 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 16c92ef9-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:32.534 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:32.534 [2024-07-25 02:39:19.405787] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.534 [2024-07-25 02:39:19.405799] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.534 [2024-07-25 02:39:19.405813] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.534 [2024-07-25 02:39:19.405825] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.534 [2024-07-25 02:39:19.405828] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x106877634f00 name raid_bdev1, state offline 00:14:32.534 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.534 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:32.793 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:32.793 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:32.793 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:32.793 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:33.052 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:33.052 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:33.311 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:33.311 02:39:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 
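The `NOT` wrapper traced just above asserts the negative case: once raid_bdev1 has written its superblock through pt1/pt2, a second bdev_raid_create aimed directly at malloc1 and malloc2 must be rejected. A minimal standalone sketch of that same check in shell — assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and the earlier setup steps from this trace have completed; the `rpc` helper variable is illustrative and not part of the original script — could look like this:

  #!/usr/bin/env bash
  # Hypothetical re-run of the negative check traced above; not the original bdev_raid.sh code.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Both malloc bdevs already carry the superblock written for raid_bdev1, so this RPC
  # is expected to fail with JSON-RPC error -17 ("File exists"), as the request/response
  # dump that follows in the log shows.
  if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
      echo "unexpected success: duplicate superblock was not detected" >&2
      exit 1
  fi
  echo "bdev_raid_create rejected as expected"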
00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:33.311 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:33.570 [2024-07-25 02:39:20.345895] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:33.570 [2024-07-25 02:39:20.346328] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:33.570 [2024-07-25 02:39:20.346349] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:33.570 [2024-07-25 02:39:20.346375] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:33.570 [2024-07-25 02:39:20.346382] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.570 [2024-07-25 02:39:20.346386] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x106877634c80 name raid_bdev1, state configuring 00:14:33.570 request: 00:14:33.570 { 00:14:33.570 "name": "raid_bdev1", 00:14:33.570 "raid_level": "raid1", 00:14:33.570 "base_bdevs": [ 00:14:33.570 "malloc1", 00:14:33.570 "malloc2" 00:14:33.570 ], 00:14:33.570 "superblock": false, 00:14:33.570 "method": "bdev_raid_create", 00:14:33.570 "req_id": 1 00:14:33.570 } 00:14:33.570 Got JSON-RPC error response 00:14:33.570 response: 00:14:33.570 { 00:14:33.570 "code": -17, 00:14:33.570 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:33.570 } 00:14:33.571 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:14:33.571 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:33.571 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:33.571 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:33.571 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.571 
02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:33.830 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:33.830 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:33.830 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.830 [2024-07-25 02:39:20.729926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.830 [2024-07-25 02:39:20.729964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.830 [2024-07-25 02:39:20.729972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106877634780 00:14:33.830 [2024-07-25 02:39:20.729978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.830 [2024-07-25 02:39:20.730383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.830 [2024-07-25 02:39:20.730411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.830 [2024-07-25 02:39:20.730428] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:33.830 [2024-07-25 02:39:20.730438] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.089 pt1 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.089 "name": "raid_bdev1", 00:14:34.089 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:34.089 "strip_size_kb": 0, 00:14:34.089 "state": "configuring", 00:14:34.089 "raid_level": "raid1", 00:14:34.089 "superblock": true, 00:14:34.089 "num_base_bdevs": 2, 00:14:34.089 "num_base_bdevs_discovered": 1, 00:14:34.089 "num_base_bdevs_operational": 2, 00:14:34.089 "base_bdevs_list": [ 00:14:34.089 { 00:14:34.089 "name": "pt1", 00:14:34.089 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:34.089 "is_configured": true, 00:14:34.089 "data_offset": 256, 00:14:34.089 "data_size": 7936 00:14:34.089 }, 00:14:34.089 { 00:14:34.089 "name": null, 00:14:34.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.089 "is_configured": false, 00:14:34.089 "data_offset": 256, 00:14:34.089 "data_size": 7936 00:14:34.089 } 00:14:34.089 ] 00:14:34.089 }' 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.089 02:39:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:34.347 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:34.347 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:34.347 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:34.347 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.606 [2024-07-25 02:39:21.389985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.606 [2024-07-25 02:39:21.390021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.606 [2024-07-25 02:39:21.390028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106877634f00 00:14:34.606 [2024-07-25 02:39:21.390034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.606 [2024-07-25 02:39:21.390111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.606 [2024-07-25 02:39:21.390117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.606 [2024-07-25 02:39:21.390131] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:34.606 [2024-07-25 02:39:21.390137] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.606 [2024-07-25 02:39:21.390157] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x106877635180 00:14:34.606 [2024-07-25 02:39:21.390160] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:34.606 [2024-07-25 02:39:21.390174] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x106877697e20 00:14:34.606 [2024-07-25 02:39:21.390208] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x106877635180 00:14:34.606 [2024-07-25 02:39:21.390211] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x106877635180 00:14:34.606 [2024-07-25 02:39:21.390226] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.606 pt2 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.606 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.865 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.865 "name": "raid_bdev1", 00:14:34.865 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:34.865 "strip_size_kb": 0, 00:14:34.865 "state": "online", 00:14:34.865 "raid_level": "raid1", 00:14:34.865 "superblock": true, 00:14:34.865 "num_base_bdevs": 2, 00:14:34.865 "num_base_bdevs_discovered": 2, 00:14:34.865 "num_base_bdevs_operational": 2, 00:14:34.865 "base_bdevs_list": [ 00:14:34.865 { 00:14:34.866 "name": "pt1", 00:14:34.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.866 "is_configured": true, 00:14:34.866 "data_offset": 256, 00:14:34.866 "data_size": 7936 00:14:34.866 }, 00:14:34.866 { 00:14:34.866 "name": "pt2", 00:14:34.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.866 "is_configured": true, 00:14:34.866 "data_offset": 256, 00:14:34.866 "data_size": 7936 00:14:34.866 } 00:14:34.866 ] 00:14:34.866 }' 00:14:34.866 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.866 02:39:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.125 02:39:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:35.385 [2024-07-25 02:39:22.034066] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:35.385 "name": "raid_bdev1", 00:14:35.385 "aliases": [ 00:14:35.385 "16c92ef9-4a2f-11ef-9c8e-7947904e2597" 00:14:35.385 ], 00:14:35.385 "product_name": "Raid Volume", 00:14:35.385 "block_size": 4096, 
00:14:35.385 "num_blocks": 7936, 00:14:35.385 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:35.385 "assigned_rate_limits": { 00:14:35.385 "rw_ios_per_sec": 0, 00:14:35.385 "rw_mbytes_per_sec": 0, 00:14:35.385 "r_mbytes_per_sec": 0, 00:14:35.385 "w_mbytes_per_sec": 0 00:14:35.385 }, 00:14:35.385 "claimed": false, 00:14:35.385 "zoned": false, 00:14:35.385 "supported_io_types": { 00:14:35.385 "read": true, 00:14:35.385 "write": true, 00:14:35.385 "unmap": false, 00:14:35.385 "flush": false, 00:14:35.385 "reset": true, 00:14:35.385 "nvme_admin": false, 00:14:35.385 "nvme_io": false, 00:14:35.385 "nvme_io_md": false, 00:14:35.385 "write_zeroes": true, 00:14:35.385 "zcopy": false, 00:14:35.385 "get_zone_info": false, 00:14:35.385 "zone_management": false, 00:14:35.385 "zone_append": false, 00:14:35.385 "compare": false, 00:14:35.385 "compare_and_write": false, 00:14:35.385 "abort": false, 00:14:35.385 "seek_hole": false, 00:14:35.385 "seek_data": false, 00:14:35.385 "copy": false, 00:14:35.385 "nvme_iov_md": false 00:14:35.385 }, 00:14:35.385 "memory_domains": [ 00:14:35.385 { 00:14:35.385 "dma_device_id": "system", 00:14:35.385 "dma_device_type": 1 00:14:35.385 }, 00:14:35.385 { 00:14:35.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.385 "dma_device_type": 2 00:14:35.385 }, 00:14:35.385 { 00:14:35.385 "dma_device_id": "system", 00:14:35.385 "dma_device_type": 1 00:14:35.385 }, 00:14:35.385 { 00:14:35.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.385 "dma_device_type": 2 00:14:35.385 } 00:14:35.385 ], 00:14:35.385 "driver_specific": { 00:14:35.385 "raid": { 00:14:35.385 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:35.385 "strip_size_kb": 0, 00:14:35.385 "state": "online", 00:14:35.385 "raid_level": "raid1", 00:14:35.385 "superblock": true, 00:14:35.385 "num_base_bdevs": 2, 00:14:35.385 "num_base_bdevs_discovered": 2, 00:14:35.385 "num_base_bdevs_operational": 2, 00:14:35.385 "base_bdevs_list": [ 00:14:35.385 { 00:14:35.385 "name": "pt1", 00:14:35.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.385 "is_configured": true, 00:14:35.385 "data_offset": 256, 00:14:35.385 "data_size": 7936 00:14:35.385 }, 00:14:35.385 { 00:14:35.385 "name": "pt2", 00:14:35.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.385 "is_configured": true, 00:14:35.385 "data_offset": 256, 00:14:35.385 "data_size": 7936 00:14:35.385 } 00:14:35.385 ] 00:14:35.385 } 00:14:35.385 } 00:14:35.385 }' 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:35.385 pt2' 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.385 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:35.385 "name": "pt1", 00:14:35.385 "aliases": [ 00:14:35.385 "00000000-0000-0000-0000-000000000001" 00:14:35.385 ], 00:14:35.385 "product_name": "passthru", 00:14:35.385 "block_size": 4096, 00:14:35.385 "num_blocks": 8192, 00:14:35.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.385 
"assigned_rate_limits": { 00:14:35.385 "rw_ios_per_sec": 0, 00:14:35.385 "rw_mbytes_per_sec": 0, 00:14:35.385 "r_mbytes_per_sec": 0, 00:14:35.385 "w_mbytes_per_sec": 0 00:14:35.385 }, 00:14:35.385 "claimed": true, 00:14:35.385 "claim_type": "exclusive_write", 00:14:35.385 "zoned": false, 00:14:35.385 "supported_io_types": { 00:14:35.385 "read": true, 00:14:35.385 "write": true, 00:14:35.385 "unmap": true, 00:14:35.385 "flush": true, 00:14:35.386 "reset": true, 00:14:35.386 "nvme_admin": false, 00:14:35.386 "nvme_io": false, 00:14:35.386 "nvme_io_md": false, 00:14:35.386 "write_zeroes": true, 00:14:35.386 "zcopy": true, 00:14:35.386 "get_zone_info": false, 00:14:35.386 "zone_management": false, 00:14:35.386 "zone_append": false, 00:14:35.386 "compare": false, 00:14:35.386 "compare_and_write": false, 00:14:35.386 "abort": true, 00:14:35.386 "seek_hole": false, 00:14:35.386 "seek_data": false, 00:14:35.386 "copy": true, 00:14:35.386 "nvme_iov_md": false 00:14:35.386 }, 00:14:35.386 "memory_domains": [ 00:14:35.386 { 00:14:35.386 "dma_device_id": "system", 00:14:35.386 "dma_device_type": 1 00:14:35.386 }, 00:14:35.386 { 00:14:35.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.386 "dma_device_type": 2 00:14:35.386 } 00:14:35.386 ], 00:14:35.386 "driver_specific": { 00:14:35.386 "passthru": { 00:14:35.386 "name": "pt1", 00:14:35.386 "base_bdev_name": "malloc1" 00:14:35.386 } 00:14:35.386 } 00:14:35.386 }' 00:14:35.386 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.386 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.386 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:35.386 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.386 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.645 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:35.646 "name": "pt2", 00:14:35.646 "aliases": [ 00:14:35.646 "00000000-0000-0000-0000-000000000002" 00:14:35.646 ], 00:14:35.646 "product_name": "passthru", 00:14:35.646 "block_size": 4096, 00:14:35.646 "num_blocks": 8192, 00:14:35.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.646 "assigned_rate_limits": { 00:14:35.646 "rw_ios_per_sec": 0, 00:14:35.646 "rw_mbytes_per_sec": 0, 
00:14:35.646 "r_mbytes_per_sec": 0, 00:14:35.646 "w_mbytes_per_sec": 0 00:14:35.646 }, 00:14:35.646 "claimed": true, 00:14:35.646 "claim_type": "exclusive_write", 00:14:35.646 "zoned": false, 00:14:35.646 "supported_io_types": { 00:14:35.646 "read": true, 00:14:35.646 "write": true, 00:14:35.646 "unmap": true, 00:14:35.646 "flush": true, 00:14:35.646 "reset": true, 00:14:35.646 "nvme_admin": false, 00:14:35.646 "nvme_io": false, 00:14:35.646 "nvme_io_md": false, 00:14:35.646 "write_zeroes": true, 00:14:35.646 "zcopy": true, 00:14:35.646 "get_zone_info": false, 00:14:35.646 "zone_management": false, 00:14:35.646 "zone_append": false, 00:14:35.646 "compare": false, 00:14:35.646 "compare_and_write": false, 00:14:35.646 "abort": true, 00:14:35.646 "seek_hole": false, 00:14:35.646 "seek_data": false, 00:14:35.646 "copy": true, 00:14:35.646 "nvme_iov_md": false 00:14:35.646 }, 00:14:35.646 "memory_domains": [ 00:14:35.646 { 00:14:35.646 "dma_device_id": "system", 00:14:35.646 "dma_device_type": 1 00:14:35.646 }, 00:14:35.646 { 00:14:35.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.646 "dma_device_type": 2 00:14:35.646 } 00:14:35.646 ], 00:14:35.646 "driver_specific": { 00:14:35.646 "passthru": { 00:14:35.646 "name": "pt2", 00:14:35.646 "base_bdev_name": "malloc2" 00:14:35.646 } 00:14:35.646 } 00:14:35.646 }' 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.646 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:35.906 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:35.906 [2024-07-25 02:39:22.798120] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.165 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 16c92ef9-4a2f-11ef-9c8e-7947904e2597 '!=' 16c92ef9-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:36.165 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:36.165 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:36.165 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:14:36.165 02:39:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:36.165 [2024-07-25 02:39:22.990125] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.165 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.424 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.424 "name": "raid_bdev1", 00:14:36.424 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:36.424 "strip_size_kb": 0, 00:14:36.424 "state": "online", 00:14:36.424 "raid_level": "raid1", 00:14:36.424 "superblock": true, 00:14:36.424 "num_base_bdevs": 2, 00:14:36.424 "num_base_bdevs_discovered": 1, 00:14:36.424 "num_base_bdevs_operational": 1, 00:14:36.424 "base_bdevs_list": [ 00:14:36.424 { 00:14:36.424 "name": null, 00:14:36.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.424 "is_configured": false, 00:14:36.424 "data_offset": 256, 00:14:36.424 "data_size": 7936 00:14:36.424 }, 00:14:36.424 { 00:14:36.424 "name": "pt2", 00:14:36.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.424 "is_configured": true, 00:14:36.424 "data_offset": 256, 00:14:36.424 "data_size": 7936 00:14:36.424 } 00:14:36.424 ] 00:14:36.424 }' 00:14:36.424 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.424 02:39:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:36.683 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:36.942 [2024-07-25 02:39:23.646191] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.942 [2024-07-25 02:39:23.646203] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.942 [2024-07-25 02:39:23.646213] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.942 [2024-07-25 02:39:23.646221] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.942 [2024-07-25 02:39:23.646224] 
bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x106877635180 name raid_bdev1, state offline 00:14:36.942 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.942 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:37.201 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:37.201 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:37.201 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:37.201 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:37.201 02:39:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:37.201 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:37.201 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:37.201 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:37.201 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:37.201 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:14:37.201 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.460 [2024-07-25 02:39:24.194245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.460 [2024-07-25 02:39:24.194278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.460 [2024-07-25 02:39:24.194285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106877634f00 00:14:37.460 [2024-07-25 02:39:24.194291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.460 [2024-07-25 02:39:24.194780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.460 [2024-07-25 02:39:24.194806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.460 [2024-07-25 02:39:24.194823] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:37.460 [2024-07-25 02:39:24.194848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.460 [2024-07-25 02:39:24.194866] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x106877635180 00:14:37.460 [2024-07-25 02:39:24.194870] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:37.460 [2024-07-25 02:39:24.194886] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x106877697e20 00:14:37.460 [2024-07-25 02:39:24.194918] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x106877635180 00:14:37.460 [2024-07-25 02:39:24.194921] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x106877635180 00:14:37.460 [2024-07-25 02:39:24.194937] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.460 pt2 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.460 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.719 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.719 "name": "raid_bdev1", 00:14:37.719 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:37.719 "strip_size_kb": 0, 00:14:37.719 "state": "online", 00:14:37.719 "raid_level": "raid1", 00:14:37.719 "superblock": true, 00:14:37.719 "num_base_bdevs": 2, 00:14:37.719 "num_base_bdevs_discovered": 1, 00:14:37.719 "num_base_bdevs_operational": 1, 00:14:37.719 "base_bdevs_list": [ 00:14:37.719 { 00:14:37.719 "name": null, 00:14:37.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.719 "is_configured": false, 00:14:37.719 "data_offset": 256, 00:14:37.719 "data_size": 7936 00:14:37.719 }, 00:14:37.719 { 00:14:37.719 "name": "pt2", 00:14:37.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.719 "is_configured": true, 00:14:37.719 "data_offset": 256, 00:14:37.719 "data_size": 7936 00:14:37.719 } 00:14:37.719 ] 00:14:37.719 }' 00:14:37.719 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.719 02:39:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:37.979 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:37.979 [2024-07-25 02:39:24.846293] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.979 [2024-07-25 02:39:24.846308] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.979 [2024-07-25 02:39:24.846323] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.979 [2024-07-25 02:39:24.846332] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.979 [2024-07-25 02:39:24.846335] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x106877635180 name raid_bdev1, state offline 00:14:37.979 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.979 02:39:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:38.238 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:38.238 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:38.238 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:14:38.238 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.498 [2024-07-25 02:39:25.238334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.498 [2024-07-25 02:39:25.238363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.498 [2024-07-25 02:39:25.238370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106877634c80 00:14:38.498 [2024-07-25 02:39:25.238375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.498 [2024-07-25 02:39:25.238840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.498 [2024-07-25 02:39:25.238864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.498 [2024-07-25 02:39:25.238881] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:38.498 [2024-07-25 02:39:25.238890] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.498 [2024-07-25 02:39:25.238910] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:38.498 [2024-07-25 02:39:25.238913] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.498 [2024-07-25 02:39:25.238917] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x106877634780 name raid_bdev1, state configuring 00:14:38.498 [2024-07-25 02:39:25.238929] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.498 [2024-07-25 02:39:25.238940] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x106877634780 00:14:38.498 [2024-07-25 02:39:25.238943] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:38.498 [2024-07-25 02:39:25.238958] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x106877697e20 00:14:38.498 [2024-07-25 02:39:25.238987] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x106877634780 00:14:38.498 [2024-07-25 02:39:25.238991] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x106877634780 00:14:38.498 [2024-07-25 02:39:25.239005] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.498 pt1 00:14:38.498 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:14:38.498 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:38.498 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:38.498 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.498 02:39:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:38.498 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.499 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.758 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.758 "name": "raid_bdev1", 00:14:38.758 "uuid": "16c92ef9-4a2f-11ef-9c8e-7947904e2597", 00:14:38.758 "strip_size_kb": 0, 00:14:38.758 "state": "online", 00:14:38.758 "raid_level": "raid1", 00:14:38.758 "superblock": true, 00:14:38.758 "num_base_bdevs": 2, 00:14:38.758 "num_base_bdevs_discovered": 1, 00:14:38.758 "num_base_bdevs_operational": 1, 00:14:38.758 "base_bdevs_list": [ 00:14:38.758 { 00:14:38.758 "name": null, 00:14:38.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.758 "is_configured": false, 00:14:38.758 "data_offset": 256, 00:14:38.758 "data_size": 7936 00:14:38.758 }, 00:14:38.758 { 00:14:38.759 "name": "pt2", 00:14:38.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.759 "is_configured": true, 00:14:38.759 "data_offset": 256, 00:14:38.759 "data_size": 7936 00:14:38.759 } 00:14:38.759 ] 00:14:38.759 }' 00:14:38.759 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.759 02:39:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:39.018 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:39.018 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:39.018 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:39.018 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.018 02:39:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:39.277 [2024-07-25 02:39:26.078435] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 16c92ef9-4a2f-11ef-9c8e-7947904e2597 '!=' 16c92ef9-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65266 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65266 ']' 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65266 00:14:39.277 02:39:26 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65266 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:39.277 killing process with pid 65266 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65266' 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65266 00:14:39.277 [2024-07-25 02:39:26.121256] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.277 [2024-07-25 02:39:26.121283] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.277 [2024-07-25 02:39:26.121292] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.277 [2024-07-25 02:39:26.121296] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x106877634780 name raid_bdev1, state offline 00:14:39.277 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65266 00:14:39.277 [2024-07-25 02:39:26.130751] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.538 02:39:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:14:39.538 00:14:39.538 real 0m10.290s 00:14:39.538 user 0m17.447s 00:14:39.538 sys 0m2.496s 00:14:39.538 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.538 02:39:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:39.538 ************************************ 00:14:39.538 END TEST raid_superblock_test_4k 00:14:39.538 ************************************ 00:14:39.538 02:39:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:39.538 02:39:26 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:14:39.538 02:39:26 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:14:39.538 02:39:26 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:14:39.538 02:39:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:39.538 02:39:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.538 02:39:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.538 ************************************ 00:14:39.538 START TEST raid_state_function_test_sb_md_separate 00:14:39.538 ************************************ 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 
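For readers following the teardown just traced for pid 65266, the whole sequence reduces to a handful of shell commands. A minimal sketch, assuming the same FreeBSD host and a bdev_svc process started from the current shell; everything below is reconstructed from the trace and is not the actual autotest_common.sh helper:

    pid=65266                                    # pid of the bdev_svc under test, taken from this log
    [ -n "$pid" ] && kill -0 "$pid" || exit 1    # bail out if the pid is empty or the process is already gone
    # FreeBSD ps as traced: -c trims the command to its bare name, tail -1 skips the header line
    process_name=$(ps -c -o command "$pid" | tail -1)
    if [ "$process_name" != sudo ]; then         # never kill a sudo wrapper by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap it; assumes this shell launched the process
    fi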
00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=65645 00:14:39.538 Process raid pid: 65645 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65645' 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 65645 /var/tmp/spdk-raid.sock 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 65645 ']' 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
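The waitforlisten step traced here boils down to starting the stub bdev application on a dedicated RPC socket and polling that socket until it answers. A rough sketch using the same paths and flags shown in the trace; the polling loop is only an illustrative stand-in for the real waitforlisten helper, not its implementation:

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk-raid.sock
    # Start the stub bdev application with bdev_raid debug logging, as this test does.
    "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the socket accepts a trivial RPC; rpc_get_methods is a standard SPDK RPC.
    for _ in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods > /dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done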
00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.538 02:39:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:39.538 [2024-07-25 02:39:26.393194] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:39.538 [2024-07-25 02:39:26.393484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:40.477 EAL: TSC is not safe to use in SMP mode 00:14:40.477 EAL: TSC is not invariant 00:14:40.477 [2024-07-25 02:39:27.130878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.477 [2024-07-25 02:39:27.223835] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:40.477 [2024-07-25 02:39:27.225515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.477 [2024-07-25 02:39:27.226091] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.477 [2024-07-25 02:39:27.226103] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.477 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.477 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:14:40.477 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:40.737 [2024-07-25 02:39:27.460923] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.737 [2024-07-25 02:39:27.460955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.737 [2024-07-25 02:39:27.460959] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.737 [2024-07-25 02:39:27.460964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.737 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.996 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.996 "name": "Existed_Raid", 00:14:40.996 "uuid": "1c8bae16-4a2f-11ef-9c8e-7947904e2597", 00:14:40.996 "strip_size_kb": 0, 00:14:40.996 "state": "configuring", 00:14:40.996 "raid_level": "raid1", 00:14:40.996 "superblock": true, 00:14:40.996 "num_base_bdevs": 2, 00:14:40.996 "num_base_bdevs_discovered": 0, 00:14:40.996 "num_base_bdevs_operational": 2, 00:14:40.996 "base_bdevs_list": [ 00:14:40.996 { 00:14:40.996 "name": "BaseBdev1", 00:14:40.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.996 "is_configured": false, 00:14:40.996 "data_offset": 0, 00:14:40.996 "data_size": 0 00:14:40.996 }, 00:14:40.996 { 00:14:40.996 "name": "BaseBdev2", 00:14:40.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.996 "is_configured": false, 00:14:40.996 "data_offset": 0, 00:14:40.996 "data_size": 0 00:14:40.996 } 00:14:40.996 ] 00:14:40.996 }' 00:14:40.996 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.996 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:41.256 02:39:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:41.256 [2024-07-25 02:39:28.132957] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.256 [2024-07-25 02:39:28.132970] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ed36c634500 name Existed_Raid, state configuring 00:14:41.256 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:41.516 [2024-07-25 02:39:28.312980] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.516 [2024-07-25 02:39:28.313003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.516 [2024-07-25 02:39:28.313006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.516 [2024-07-25 02:39:28.313012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.516 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:14:41.776 [2024-07-25 02:39:28.493693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.776 BaseBdev1 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:41.776 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.035 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.035 [ 00:14:42.035 { 00:14:42.035 "name": "BaseBdev1", 00:14:42.035 "aliases": [ 00:14:42.035 "1d2929c2-4a2f-11ef-9c8e-7947904e2597" 00:14:42.035 ], 00:14:42.035 "product_name": "Malloc disk", 00:14:42.035 "block_size": 4096, 00:14:42.035 "num_blocks": 8192, 00:14:42.035 "uuid": "1d2929c2-4a2f-11ef-9c8e-7947904e2597", 00:14:42.035 "md_size": 32, 00:14:42.035 "md_interleave": false, 00:14:42.035 "dif_type": 0, 00:14:42.035 "assigned_rate_limits": { 00:14:42.035 "rw_ios_per_sec": 0, 00:14:42.035 "rw_mbytes_per_sec": 0, 00:14:42.035 "r_mbytes_per_sec": 0, 00:14:42.035 "w_mbytes_per_sec": 0 00:14:42.035 }, 00:14:42.035 "claimed": true, 00:14:42.035 "claim_type": "exclusive_write", 00:14:42.035 "zoned": false, 00:14:42.035 "supported_io_types": { 00:14:42.035 "read": true, 00:14:42.036 "write": true, 00:14:42.036 "unmap": true, 00:14:42.036 "flush": true, 00:14:42.036 "reset": true, 00:14:42.036 "nvme_admin": false, 00:14:42.036 "nvme_io": false, 00:14:42.036 "nvme_io_md": false, 00:14:42.036 "write_zeroes": true, 00:14:42.036 "zcopy": true, 00:14:42.036 "get_zone_info": false, 00:14:42.036 "zone_management": false, 00:14:42.036 "zone_append": false, 00:14:42.036 "compare": false, 00:14:42.036 "compare_and_write": false, 00:14:42.036 "abort": true, 00:14:42.036 "seek_hole": false, 00:14:42.036 "seek_data": false, 00:14:42.036 "copy": true, 00:14:42.036 "nvme_iov_md": false 00:14:42.036 }, 00:14:42.036 "memory_domains": [ 00:14:42.036 { 00:14:42.036 "dma_device_id": "system", 00:14:42.036 "dma_device_type": 1 00:14:42.036 }, 00:14:42.036 { 00:14:42.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.036 "dma_device_type": 2 00:14:42.036 } 00:14:42.036 ], 00:14:42.036 "driver_specific": {} 00:14:42.036 } 00:14:42.036 ] 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:42.036 02:39:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.036 02:39:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.296 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.296 "name": "Existed_Raid", 00:14:42.296 "uuid": "1d0db1ac-4a2f-11ef-9c8e-7947904e2597", 00:14:42.296 "strip_size_kb": 0, 00:14:42.296 "state": "configuring", 00:14:42.296 "raid_level": "raid1", 00:14:42.296 "superblock": true, 00:14:42.296 "num_base_bdevs": 2, 00:14:42.296 "num_base_bdevs_discovered": 1, 00:14:42.296 "num_base_bdevs_operational": 2, 00:14:42.296 "base_bdevs_list": [ 00:14:42.296 { 00:14:42.296 "name": "BaseBdev1", 00:14:42.296 "uuid": "1d2929c2-4a2f-11ef-9c8e-7947904e2597", 00:14:42.296 "is_configured": true, 00:14:42.296 "data_offset": 256, 00:14:42.296 "data_size": 7936 00:14:42.296 }, 00:14:42.296 { 00:14:42.296 "name": "BaseBdev2", 00:14:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.296 "is_configured": false, 00:14:42.296 "data_offset": 0, 00:14:42.296 "data_size": 0 00:14:42.296 } 00:14:42.296 ] 00:14:42.296 }' 00:14:42.296 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.296 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:42.556 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.816 [2024-07-25 02:39:29.529096] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.816 [2024-07-25 02:39:29.529119] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ed36c634500 name Existed_Raid, state configuring 00:14:42.816 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:42.816 [2024-07-25 02:39:29.697122] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.816 [2024-07-25 02:39:29.697736] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.816 [2024-07-25 02:39:29.697769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:43.076 02:39:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.076 "name": "Existed_Raid", 00:14:43.076 "uuid": "1de0e5a2-4a2f-11ef-9c8e-7947904e2597", 00:14:43.076 "strip_size_kb": 0, 00:14:43.076 "state": "configuring", 00:14:43.076 "raid_level": "raid1", 00:14:43.076 "superblock": true, 00:14:43.076 "num_base_bdevs": 2, 00:14:43.076 "num_base_bdevs_discovered": 1, 00:14:43.076 "num_base_bdevs_operational": 2, 00:14:43.076 "base_bdevs_list": [ 00:14:43.076 { 00:14:43.076 "name": "BaseBdev1", 00:14:43.076 "uuid": "1d2929c2-4a2f-11ef-9c8e-7947904e2597", 00:14:43.076 "is_configured": true, 00:14:43.076 "data_offset": 256, 00:14:43.076 "data_size": 7936 00:14:43.076 }, 00:14:43.076 { 00:14:43.076 "name": "BaseBdev2", 00:14:43.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.076 "is_configured": false, 00:14:43.076 "data_offset": 0, 00:14:43.076 "data_size": 0 00:14:43.076 } 00:14:43.076 ] 00:14:43.076 }' 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.076 02:39:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:43.336 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:14:43.595 [2024-07-25 02:39:30.353229] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.595 [2024-07-25 02:39:30.353264] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2ed36c634a00 00:14:43.595 [2024-07-25 02:39:30.353268] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:43.595 [2024-07-25 02:39:30.353284] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x2ed36c697e20 00:14:43.595 [2024-07-25 02:39:30.353306] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2ed36c634a00 00:14:43.595 [2024-07-25 02:39:30.353308] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2ed36c634a00 00:14:43.595 [2024-07-25 02:39:30.353318] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.595 BaseBdev2 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.595 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.856 [ 00:14:43.856 { 00:14:43.856 "name": "BaseBdev2", 00:14:43.856 "aliases": [ 00:14:43.856 "1e4500ed-4a2f-11ef-9c8e-7947904e2597" 00:14:43.856 ], 00:14:43.856 "product_name": "Malloc disk", 00:14:43.856 "block_size": 4096, 00:14:43.856 "num_blocks": 8192, 00:14:43.856 "uuid": "1e4500ed-4a2f-11ef-9c8e-7947904e2597", 00:14:43.856 "md_size": 32, 00:14:43.856 "md_interleave": false, 00:14:43.856 "dif_type": 0, 00:14:43.856 "assigned_rate_limits": { 00:14:43.856 "rw_ios_per_sec": 0, 00:14:43.856 "rw_mbytes_per_sec": 0, 00:14:43.856 "r_mbytes_per_sec": 0, 00:14:43.856 "w_mbytes_per_sec": 0 00:14:43.856 }, 00:14:43.856 "claimed": true, 00:14:43.856 "claim_type": "exclusive_write", 00:14:43.856 "zoned": false, 00:14:43.856 "supported_io_types": { 00:14:43.856 "read": true, 00:14:43.856 "write": true, 00:14:43.856 "unmap": true, 00:14:43.856 "flush": true, 00:14:43.856 "reset": true, 00:14:43.856 "nvme_admin": false, 00:14:43.856 "nvme_io": false, 00:14:43.856 "nvme_io_md": false, 00:14:43.856 "write_zeroes": true, 00:14:43.856 "zcopy": true, 00:14:43.856 "get_zone_info": false, 00:14:43.856 "zone_management": false, 00:14:43.856 "zone_append": false, 00:14:43.856 "compare": false, 00:14:43.856 "compare_and_write": false, 00:14:43.856 "abort": true, 00:14:43.856 "seek_hole": false, 00:14:43.856 "seek_data": false, 00:14:43.856 "copy": true, 00:14:43.856 "nvme_iov_md": false 00:14:43.856 }, 00:14:43.856 "memory_domains": [ 00:14:43.856 { 00:14:43.856 "dma_device_id": "system", 00:14:43.856 "dma_device_type": 1 00:14:43.856 }, 00:14:43.856 { 00:14:43.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.856 "dma_device_type": 2 00:14:43.856 } 00:14:43.856 ], 00:14:43.856 "driver_specific": {} 00:14:43.856 } 00:14:43.856 ] 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:14:43.856 02:39:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.856 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.116 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.116 "name": "Existed_Raid", 00:14:44.116 "uuid": "1de0e5a2-4a2f-11ef-9c8e-7947904e2597", 00:14:44.116 "strip_size_kb": 0, 00:14:44.116 "state": "online", 00:14:44.116 "raid_level": "raid1", 00:14:44.116 "superblock": true, 00:14:44.116 "num_base_bdevs": 2, 00:14:44.116 "num_base_bdevs_discovered": 2, 00:14:44.116 "num_base_bdevs_operational": 2, 00:14:44.116 "base_bdevs_list": [ 00:14:44.116 { 00:14:44.116 "name": "BaseBdev1", 00:14:44.116 "uuid": "1d2929c2-4a2f-11ef-9c8e-7947904e2597", 00:14:44.116 "is_configured": true, 00:14:44.116 "data_offset": 256, 00:14:44.116 "data_size": 7936 00:14:44.116 }, 00:14:44.116 { 00:14:44.116 "name": "BaseBdev2", 00:14:44.116 "uuid": "1e4500ed-4a2f-11ef-9c8e-7947904e2597", 00:14:44.116 "is_configured": true, 00:14:44.116 "data_offset": 256, 00:14:44.116 "data_size": 7936 00:14:44.116 } 00:14:44.116 ] 00:14:44.116 }' 00:14:44.116 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.116 02:39:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:44.376 02:39:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:44.376 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:44.635 [2024-07-25 02:39:31.369288] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.635 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:44.635 "name": "Existed_Raid", 00:14:44.635 "aliases": [ 00:14:44.635 "1de0e5a2-4a2f-11ef-9c8e-7947904e2597" 00:14:44.635 ], 00:14:44.635 "product_name": "Raid Volume", 00:14:44.635 "block_size": 4096, 00:14:44.635 "num_blocks": 7936, 00:14:44.635 "uuid": "1de0e5a2-4a2f-11ef-9c8e-7947904e2597", 00:14:44.635 "md_size": 32, 00:14:44.635 "md_interleave": false, 00:14:44.635 "dif_type": 0, 00:14:44.635 "assigned_rate_limits": { 00:14:44.635 "rw_ios_per_sec": 0, 00:14:44.635 "rw_mbytes_per_sec": 0, 00:14:44.635 "r_mbytes_per_sec": 0, 00:14:44.635 "w_mbytes_per_sec": 0 00:14:44.635 }, 00:14:44.635 "claimed": false, 00:14:44.635 "zoned": false, 00:14:44.635 "supported_io_types": { 00:14:44.635 "read": true, 00:14:44.635 "write": true, 00:14:44.635 "unmap": false, 00:14:44.635 "flush": false, 00:14:44.635 "reset": true, 00:14:44.635 "nvme_admin": false, 00:14:44.635 "nvme_io": false, 00:14:44.635 "nvme_io_md": false, 00:14:44.635 "write_zeroes": true, 00:14:44.635 "zcopy": false, 00:14:44.635 "get_zone_info": false, 00:14:44.635 "zone_management": false, 00:14:44.635 "zone_append": false, 00:14:44.635 "compare": false, 00:14:44.635 "compare_and_write": false, 00:14:44.635 "abort": false, 00:14:44.635 "seek_hole": false, 00:14:44.635 "seek_data": false, 00:14:44.636 "copy": false, 00:14:44.636 "nvme_iov_md": false 00:14:44.636 }, 00:14:44.636 "memory_domains": [ 00:14:44.636 { 00:14:44.636 "dma_device_id": "system", 00:14:44.636 "dma_device_type": 1 00:14:44.636 }, 00:14:44.636 { 00:14:44.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.636 "dma_device_type": 2 00:14:44.636 }, 00:14:44.636 { 00:14:44.636 "dma_device_id": "system", 00:14:44.636 "dma_device_type": 1 00:14:44.636 }, 00:14:44.636 { 00:14:44.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.636 "dma_device_type": 2 00:14:44.636 } 00:14:44.636 ], 00:14:44.636 "driver_specific": { 00:14:44.636 "raid": { 00:14:44.636 "uuid": "1de0e5a2-4a2f-11ef-9c8e-7947904e2597", 00:14:44.636 "strip_size_kb": 0, 00:14:44.636 "state": "online", 00:14:44.636 "raid_level": "raid1", 00:14:44.636 "superblock": true, 00:14:44.636 "num_base_bdevs": 2, 00:14:44.636 "num_base_bdevs_discovered": 2, 00:14:44.636 "num_base_bdevs_operational": 2, 00:14:44.636 "base_bdevs_list": [ 00:14:44.636 { 00:14:44.636 "name": "BaseBdev1", 00:14:44.636 "uuid": "1d2929c2-4a2f-11ef-9c8e-7947904e2597", 00:14:44.636 "is_configured": true, 00:14:44.636 "data_offset": 256, 00:14:44.636 "data_size": 7936 00:14:44.636 }, 00:14:44.636 { 00:14:44.636 "name": "BaseBdev2", 00:14:44.636 "uuid": "1e4500ed-4a2f-11ef-9c8e-7947904e2597", 00:14:44.636 "is_configured": true, 00:14:44.636 "data_offset": 
256, 00:14:44.636 "data_size": 7936 00:14:44.636 } 00:14:44.636 ] 00:14:44.636 } 00:14:44.636 } 00:14:44.636 }' 00:14:44.636 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.636 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:44.636 BaseBdev2' 00:14:44.636 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.636 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:44.636 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.895 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.895 "name": "BaseBdev1", 00:14:44.895 "aliases": [ 00:14:44.895 "1d2929c2-4a2f-11ef-9c8e-7947904e2597" 00:14:44.895 ], 00:14:44.895 "product_name": "Malloc disk", 00:14:44.895 "block_size": 4096, 00:14:44.895 "num_blocks": 8192, 00:14:44.895 "uuid": "1d2929c2-4a2f-11ef-9c8e-7947904e2597", 00:14:44.895 "md_size": 32, 00:14:44.895 "md_interleave": false, 00:14:44.895 "dif_type": 0, 00:14:44.895 "assigned_rate_limits": { 00:14:44.895 "rw_ios_per_sec": 0, 00:14:44.895 "rw_mbytes_per_sec": 0, 00:14:44.895 "r_mbytes_per_sec": 0, 00:14:44.895 "w_mbytes_per_sec": 0 00:14:44.895 }, 00:14:44.895 "claimed": true, 00:14:44.895 "claim_type": "exclusive_write", 00:14:44.895 "zoned": false, 00:14:44.895 "supported_io_types": { 00:14:44.895 "read": true, 00:14:44.895 "write": true, 00:14:44.895 "unmap": true, 00:14:44.895 "flush": true, 00:14:44.895 "reset": true, 00:14:44.895 "nvme_admin": false, 00:14:44.895 "nvme_io": false, 00:14:44.895 "nvme_io_md": false, 00:14:44.895 "write_zeroes": true, 00:14:44.895 "zcopy": true, 00:14:44.895 "get_zone_info": false, 00:14:44.895 "zone_management": false, 00:14:44.895 "zone_append": false, 00:14:44.895 "compare": false, 00:14:44.896 "compare_and_write": false, 00:14:44.896 "abort": true, 00:14:44.896 "seek_hole": false, 00:14:44.896 "seek_data": false, 00:14:44.896 "copy": true, 00:14:44.896 "nvme_iov_md": false 00:14:44.896 }, 00:14:44.896 "memory_domains": [ 00:14:44.896 { 00:14:44.896 "dma_device_id": "system", 00:14:44.896 "dma_device_type": 1 00:14:44.896 }, 00:14:44.896 { 00:14:44.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.896 "dma_device_type": 2 00:14:44.896 } 00:14:44.896 ], 00:14:44.896 "driver_specific": {} 00:14:44.896 }' 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
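The block_size/md_size/md_interleave/dif_type checks being traced for BaseBdev1 (and, just below, for BaseBdev2) are plain jq filters over a single RPC call. A minimal sketch built only from commands that appear in this log; the error messages are illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Dump the single bdev and keep the bare JSON object.
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 | jq '.[]')
    # A separate-metadata malloc bdev in this test: 4096-byte blocks, a 32-byte
    # metadata area, metadata not interleaved with data, and no DIF protection.
    [ "$(jq .block_size    <<< "$info")" = 4096 ]  || echo "unexpected block size"
    [ "$(jq .md_size       <<< "$info")" = 32 ]    || echo "unexpected metadata size"
    [ "$(jq .md_interleave <<< "$info")" = false ] || echo "metadata is interleaved"
    [ "$(jq .dif_type      <<< "$info")" = 0 ]     || echo "unexpected DIF type"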
00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:44.896 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:45.175 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:45.175 "name": "BaseBdev2", 00:14:45.175 "aliases": [ 00:14:45.175 "1e4500ed-4a2f-11ef-9c8e-7947904e2597" 00:14:45.175 ], 00:14:45.175 "product_name": "Malloc disk", 00:14:45.175 "block_size": 4096, 00:14:45.175 "num_blocks": 8192, 00:14:45.175 "uuid": "1e4500ed-4a2f-11ef-9c8e-7947904e2597", 00:14:45.175 "md_size": 32, 00:14:45.175 "md_interleave": false, 00:14:45.175 "dif_type": 0, 00:14:45.175 "assigned_rate_limits": { 00:14:45.175 "rw_ios_per_sec": 0, 00:14:45.175 "rw_mbytes_per_sec": 0, 00:14:45.175 "r_mbytes_per_sec": 0, 00:14:45.176 "w_mbytes_per_sec": 0 00:14:45.176 }, 00:14:45.176 "claimed": true, 00:14:45.176 "claim_type": "exclusive_write", 00:14:45.176 "zoned": false, 00:14:45.176 "supported_io_types": { 00:14:45.176 "read": true, 00:14:45.176 "write": true, 00:14:45.176 "unmap": true, 00:14:45.176 "flush": true, 00:14:45.176 "reset": true, 00:14:45.176 "nvme_admin": false, 00:14:45.176 "nvme_io": false, 00:14:45.176 "nvme_io_md": false, 00:14:45.176 "write_zeroes": true, 00:14:45.176 "zcopy": true, 00:14:45.176 "get_zone_info": false, 00:14:45.176 "zone_management": false, 00:14:45.176 "zone_append": false, 00:14:45.176 "compare": false, 00:14:45.176 "compare_and_write": false, 00:14:45.176 "abort": true, 00:14:45.176 "seek_hole": false, 00:14:45.176 "seek_data": false, 00:14:45.176 "copy": true, 00:14:45.176 "nvme_iov_md": false 00:14:45.176 }, 00:14:45.176 "memory_domains": [ 00:14:45.176 { 00:14:45.176 "dma_device_id": "system", 00:14:45.176 "dma_device_type": 1 00:14:45.176 }, 00:14:45.176 { 00:14:45.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.176 "dma_device_type": 2 00:14:45.176 } 00:14:45.176 ], 00:14:45.176 "driver_specific": {} 00:14:45.176 }' 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 
32 ]] 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:14:45.176 02:39:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:45.458 [2024-07-25 02:39:32.165339] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.459 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.734 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.734 "name": "Existed_Raid", 00:14:45.734 "uuid": "1de0e5a2-4a2f-11ef-9c8e-7947904e2597", 00:14:45.734 "strip_size_kb": 0, 00:14:45.734 "state": "online", 00:14:45.734 
"raid_level": "raid1", 00:14:45.734 "superblock": true, 00:14:45.734 "num_base_bdevs": 2, 00:14:45.734 "num_base_bdevs_discovered": 1, 00:14:45.734 "num_base_bdevs_operational": 1, 00:14:45.734 "base_bdevs_list": [ 00:14:45.734 { 00:14:45.734 "name": null, 00:14:45.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.734 "is_configured": false, 00:14:45.734 "data_offset": 256, 00:14:45.734 "data_size": 7936 00:14:45.734 }, 00:14:45.734 { 00:14:45.734 "name": "BaseBdev2", 00:14:45.734 "uuid": "1e4500ed-4a2f-11ef-9c8e-7947904e2597", 00:14:45.734 "is_configured": true, 00:14:45.734 "data_offset": 256, 00:14:45.734 "data_size": 7936 00:14:45.734 } 00:14:45.734 ] 00:14:45.734 }' 00:14:45.734 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.734 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:45.993 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:45.993 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:45.993 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.994 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:45.994 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:45.994 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.994 02:39:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:46.253 [2024-07-25 02:39:33.006177] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.253 [2024-07-25 02:39:33.006201] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.253 [2024-07-25 02:39:33.010961] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.253 [2024-07-25 02:39:33.010972] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.253 [2024-07-25 02:39:33.010975] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2ed36c634a00 name Existed_Raid, state offline 00:14:46.253 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:46.253 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.253 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.253 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.512 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:46.513 
02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 65645 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 65645 ']' 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 65645 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 65645 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:46.513 killing process with pid 65645 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65645' 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 65645 00:14:46.513 [2024-07-25 02:39:33.241515] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.513 [2024-07-25 02:39:33.241546] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 65645 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:14:46.513 00:14:46.513 real 0m7.042s 00:14:46.513 user 0m11.556s 00:14:46.513 sys 0m1.850s 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.513 02:39:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.772 ************************************ 00:14:46.772 END TEST raid_state_function_test_sb_md_separate 00:14:46.772 ************************************ 00:14:46.772 02:39:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:46.772 02:39:33 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:14:46.772 02:39:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:46.772 02:39:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.772 02:39:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.772 ************************************ 00:14:46.772 START TEST raid_superblock_test_md_separate 00:14:46.772 ************************************ 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=65907 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 65907 /var/tmp/spdk-raid.sock 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 65907 ']' 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.772 02:39:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:46.772 [2024-07-25 02:39:33.500246] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:46.772 [2024-07-25 02:39:33.500641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:47.342 EAL: TSC is not safe to use in SMP mode 00:14:47.342 EAL: TSC is not invariant 00:14:47.602 [2024-07-25 02:39:34.388688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.602 [2024-07-25 02:39:34.467257] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
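The START TEST / END TEST banners and the real/user/sys lines seen around each test come from a timing wrapper. A hedged sketch of that observable behaviour only; the real run_test helper in autotest_common.sh also manages xtrace state and return codes, which this does not attempt to reproduce:

    run_test_sketch() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                   # run the traced test function with its arguments
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    # Mirrors the invocation traced above:
    # run_test_sketch raid_superblock_test_md_separate raid_superblock_test raid1 2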
00:14:47.602 [2024-07-25 02:39:34.468697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.602 [2024-07-25 02:39:34.469246] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.602 [2024-07-25 02:39:34.469258] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:14:48.541 malloc1 00:14:48.541 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:48.541 [2024-07-25 02:39:35.420138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:48.541 [2024-07-25 02:39:35.420169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.541 [2024-07-25 02:39:35.420176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f670ba34780 00:14:48.541 [2024-07-25 02:39:35.420182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.541 [2024-07-25 02:39:35.420689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.541 [2024-07-25 02:39:35.420720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:48.541 pt1 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:48.800 02:39:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:14:48.800 malloc2 00:14:48.800 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.060 [2024-07-25 02:39:35.792179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.060 [2024-07-25 02:39:35.792209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.060 [2024-07-25 02:39:35.792217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f670ba34c80 00:14:49.060 [2024-07-25 02:39:35.792222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.060 [2024-07-25 02:39:35.792521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.060 [2024-07-25 02:39:35.792554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.060 pt2 00:14:49.060 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:49.060 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:49.060 02:39:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:49.320 [2024-07-25 02:39:35.984208] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:49.320 [2024-07-25 02:39:35.984422] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.320 [2024-07-25 02:39:35.984461] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f670ba34f00 00:14:49.320 [2024-07-25 02:39:35.984465] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:49.320 [2024-07-25 02:39:35.984497] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f670ba97e20 00:14:49.320 [2024-07-25 02:39:35.984519] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f670ba34f00 00:14:49.320 [2024-07-25 02:39:35.984521] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f670ba34f00 00:14:49.320 [2024-07-25 02:39:35.984531] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:49.320 
02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.320 "name": "raid_bdev1", 00:14:49.320 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:49.320 "strip_size_kb": 0, 00:14:49.320 "state": "online", 00:14:49.320 "raid_level": "raid1", 00:14:49.320 "superblock": true, 00:14:49.320 "num_base_bdevs": 2, 00:14:49.320 "num_base_bdevs_discovered": 2, 00:14:49.320 "num_base_bdevs_operational": 2, 00:14:49.320 "base_bdevs_list": [ 00:14:49.320 { 00:14:49.320 "name": "pt1", 00:14:49.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.320 "is_configured": true, 00:14:49.320 "data_offset": 256, 00:14:49.320 "data_size": 7936 00:14:49.320 }, 00:14:49.320 { 00:14:49.320 "name": "pt2", 00:14:49.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.320 "is_configured": true, 00:14:49.320 "data_offset": 256, 00:14:49.320 "data_size": 7936 00:14:49.320 } 00:14:49.320 ] 00:14:49.320 }' 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.320 02:39:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:49.580 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:49.580 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:49.840 [2024-07-25 02:39:36.648300] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:49.840 "name": "raid_bdev1", 00:14:49.840 "aliases": [ 00:14:49.840 "21a03afa-4a2f-11ef-9c8e-7947904e2597" 00:14:49.840 ], 00:14:49.840 "product_name": "Raid Volume", 00:14:49.840 "block_size": 
4096, 00:14:49.840 "num_blocks": 7936, 00:14:49.840 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:49.840 "md_size": 32, 00:14:49.840 "md_interleave": false, 00:14:49.840 "dif_type": 0, 00:14:49.840 "assigned_rate_limits": { 00:14:49.840 "rw_ios_per_sec": 0, 00:14:49.840 "rw_mbytes_per_sec": 0, 00:14:49.840 "r_mbytes_per_sec": 0, 00:14:49.840 "w_mbytes_per_sec": 0 00:14:49.840 }, 00:14:49.840 "claimed": false, 00:14:49.840 "zoned": false, 00:14:49.840 "supported_io_types": { 00:14:49.840 "read": true, 00:14:49.840 "write": true, 00:14:49.840 "unmap": false, 00:14:49.840 "flush": false, 00:14:49.840 "reset": true, 00:14:49.840 "nvme_admin": false, 00:14:49.840 "nvme_io": false, 00:14:49.840 "nvme_io_md": false, 00:14:49.840 "write_zeroes": true, 00:14:49.840 "zcopy": false, 00:14:49.840 "get_zone_info": false, 00:14:49.840 "zone_management": false, 00:14:49.840 "zone_append": false, 00:14:49.840 "compare": false, 00:14:49.840 "compare_and_write": false, 00:14:49.840 "abort": false, 00:14:49.840 "seek_hole": false, 00:14:49.840 "seek_data": false, 00:14:49.840 "copy": false, 00:14:49.840 "nvme_iov_md": false 00:14:49.840 }, 00:14:49.840 "memory_domains": [ 00:14:49.840 { 00:14:49.840 "dma_device_id": "system", 00:14:49.840 "dma_device_type": 1 00:14:49.840 }, 00:14:49.840 { 00:14:49.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.840 "dma_device_type": 2 00:14:49.840 }, 00:14:49.840 { 00:14:49.840 "dma_device_id": "system", 00:14:49.840 "dma_device_type": 1 00:14:49.840 }, 00:14:49.840 { 00:14:49.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.840 "dma_device_type": 2 00:14:49.840 } 00:14:49.840 ], 00:14:49.840 "driver_specific": { 00:14:49.840 "raid": { 00:14:49.840 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:49.840 "strip_size_kb": 0, 00:14:49.840 "state": "online", 00:14:49.840 "raid_level": "raid1", 00:14:49.840 "superblock": true, 00:14:49.840 "num_base_bdevs": 2, 00:14:49.840 "num_base_bdevs_discovered": 2, 00:14:49.840 "num_base_bdevs_operational": 2, 00:14:49.840 "base_bdevs_list": [ 00:14:49.840 { 00:14:49.840 "name": "pt1", 00:14:49.840 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.840 "is_configured": true, 00:14:49.840 "data_offset": 256, 00:14:49.840 "data_size": 7936 00:14:49.840 }, 00:14:49.840 { 00:14:49.840 "name": "pt2", 00:14:49.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.840 "is_configured": true, 00:14:49.840 "data_offset": 256, 00:14:49.840 "data_size": 7936 00:14:49.840 } 00:14:49.840 ] 00:14:49.840 } 00:14:49.840 } 00:14:49.840 }' 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:49.840 pt2' 00:14:49.840 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:49.841 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:49.841 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.101 "name": "pt1", 00:14:50.101 "aliases": [ 00:14:50.101 "00000000-0000-0000-0000-000000000001" 00:14:50.101 ], 00:14:50.101 "product_name": 
"passthru", 00:14:50.101 "block_size": 4096, 00:14:50.101 "num_blocks": 8192, 00:14:50.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.101 "md_size": 32, 00:14:50.101 "md_interleave": false, 00:14:50.101 "dif_type": 0, 00:14:50.101 "assigned_rate_limits": { 00:14:50.101 "rw_ios_per_sec": 0, 00:14:50.101 "rw_mbytes_per_sec": 0, 00:14:50.101 "r_mbytes_per_sec": 0, 00:14:50.101 "w_mbytes_per_sec": 0 00:14:50.101 }, 00:14:50.101 "claimed": true, 00:14:50.101 "claim_type": "exclusive_write", 00:14:50.101 "zoned": false, 00:14:50.101 "supported_io_types": { 00:14:50.101 "read": true, 00:14:50.101 "write": true, 00:14:50.101 "unmap": true, 00:14:50.101 "flush": true, 00:14:50.101 "reset": true, 00:14:50.101 "nvme_admin": false, 00:14:50.101 "nvme_io": false, 00:14:50.101 "nvme_io_md": false, 00:14:50.101 "write_zeroes": true, 00:14:50.101 "zcopy": true, 00:14:50.101 "get_zone_info": false, 00:14:50.101 "zone_management": false, 00:14:50.101 "zone_append": false, 00:14:50.101 "compare": false, 00:14:50.101 "compare_and_write": false, 00:14:50.101 "abort": true, 00:14:50.101 "seek_hole": false, 00:14:50.101 "seek_data": false, 00:14:50.101 "copy": true, 00:14:50.101 "nvme_iov_md": false 00:14:50.101 }, 00:14:50.101 "memory_domains": [ 00:14:50.101 { 00:14:50.101 "dma_device_id": "system", 00:14:50.101 "dma_device_type": 1 00:14:50.101 }, 00:14:50.101 { 00:14:50.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.101 "dma_device_type": 2 00:14:50.101 } 00:14:50.101 ], 00:14:50.101 "driver_specific": { 00:14:50.101 "passthru": { 00:14:50.101 "name": "pt1", 00:14:50.101 "base_bdev_name": "malloc1" 00:14:50.101 } 00:14:50.101 } 00:14:50.101 }' 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:50.101 02:39:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.361 "name": 
"pt2", 00:14:50.361 "aliases": [ 00:14:50.361 "00000000-0000-0000-0000-000000000002" 00:14:50.361 ], 00:14:50.361 "product_name": "passthru", 00:14:50.361 "block_size": 4096, 00:14:50.361 "num_blocks": 8192, 00:14:50.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.361 "md_size": 32, 00:14:50.361 "md_interleave": false, 00:14:50.361 "dif_type": 0, 00:14:50.361 "assigned_rate_limits": { 00:14:50.361 "rw_ios_per_sec": 0, 00:14:50.361 "rw_mbytes_per_sec": 0, 00:14:50.361 "r_mbytes_per_sec": 0, 00:14:50.361 "w_mbytes_per_sec": 0 00:14:50.361 }, 00:14:50.361 "claimed": true, 00:14:50.361 "claim_type": "exclusive_write", 00:14:50.361 "zoned": false, 00:14:50.361 "supported_io_types": { 00:14:50.361 "read": true, 00:14:50.361 "write": true, 00:14:50.361 "unmap": true, 00:14:50.361 "flush": true, 00:14:50.361 "reset": true, 00:14:50.361 "nvme_admin": false, 00:14:50.361 "nvme_io": false, 00:14:50.361 "nvme_io_md": false, 00:14:50.361 "write_zeroes": true, 00:14:50.361 "zcopy": true, 00:14:50.361 "get_zone_info": false, 00:14:50.361 "zone_management": false, 00:14:50.361 "zone_append": false, 00:14:50.361 "compare": false, 00:14:50.361 "compare_and_write": false, 00:14:50.361 "abort": true, 00:14:50.361 "seek_hole": false, 00:14:50.361 "seek_data": false, 00:14:50.361 "copy": true, 00:14:50.361 "nvme_iov_md": false 00:14:50.361 }, 00:14:50.361 "memory_domains": [ 00:14:50.361 { 00:14:50.361 "dma_device_id": "system", 00:14:50.361 "dma_device_type": 1 00:14:50.361 }, 00:14:50.361 { 00:14:50.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.361 "dma_device_type": 2 00:14:50.361 } 00:14:50.361 ], 00:14:50.361 "driver_specific": { 00:14:50.361 "passthru": { 00:14:50.361 "name": "pt2", 00:14:50.361 "base_bdev_name": "malloc2" 00:14:50.361 } 00:14:50.361 } 00:14:50.361 }' 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:50.361 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:50.620 [2024-07-25 02:39:37.400376] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:50.620 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=21a03afa-4a2f-11ef-9c8e-7947904e2597 00:14:50.620 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 21a03afa-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:50.620 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:50.880 [2024-07-25 02:39:37.584366] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.880 [2024-07-25 02:39:37.584376] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.880 [2024-07-25 02:39:37.584389] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.880 [2024-07-25 02:39:37.584400] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.880 [2024-07-25 02:39:37.584403] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f670ba34f00 name raid_bdev1, state offline 00:14:50.880 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:50.880 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.140 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:51.140 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:51.140 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:51.140 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:51.140 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:51.140 02:39:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:51.399 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:51.399 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.658 
02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:51.658 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:51.658 [2024-07-25 02:39:38.508461] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:51.658 [2024-07-25 02:39:38.508889] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:51.658 [2024-07-25 02:39:38.508911] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:51.658 [2024-07-25 02:39:38.508936] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:51.659 [2024-07-25 02:39:38.508944] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.659 [2024-07-25 02:39:38.508947] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f670ba34c80 name raid_bdev1, state configuring 00:14:51.659 request: 00:14:51.659 { 00:14:51.659 "name": "raid_bdev1", 00:14:51.659 "raid_level": "raid1", 00:14:51.659 "base_bdevs": [ 00:14:51.659 "malloc1", 00:14:51.659 "malloc2" 00:14:51.659 ], 00:14:51.659 "superblock": false, 00:14:51.659 "method": "bdev_raid_create", 00:14:51.659 "req_id": 1 00:14:51.659 } 00:14:51.659 Got JSON-RPC error response 00:14:51.659 response: 00:14:51.659 { 00:14:51.659 "code": -17, 00:14:51.659 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:51.659 } 00:14:51.659 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:14:51.659 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.659 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:51.659 02:39:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.659 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.659 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:51.916 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:51.916 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:14:51.916 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:52.175 [2024-07-25 02:39:38.864488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.175 [2024-07-25 02:39:38.864513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.175 [2024-07-25 02:39:38.864520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f670ba34780 00:14:52.175 [2024-07-25 02:39:38.864526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.175 [2024-07-25 02:39:38.864803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.175 [2024-07-25 02:39:38.864821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.175 [2024-07-25 02:39:38.864835] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:52.175 [2024-07-25 02:39:38.864842] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:52.175 pt1 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.175 02:39:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.175 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.175 "name": "raid_bdev1", 00:14:52.175 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:52.175 "strip_size_kb": 0, 00:14:52.175 "state": "configuring", 00:14:52.175 "raid_level": "raid1", 00:14:52.175 "superblock": true, 00:14:52.175 "num_base_bdevs": 2, 00:14:52.175 "num_base_bdevs_discovered": 1, 00:14:52.175 "num_base_bdevs_operational": 2, 00:14:52.175 "base_bdevs_list": [ 00:14:52.175 { 00:14:52.175 "name": "pt1", 00:14:52.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.175 "is_configured": true, 00:14:52.175 "data_offset": 256, 00:14:52.175 "data_size": 7936 00:14:52.175 }, 00:14:52.175 { 
00:14:52.175 "name": null, 00:14:52.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.175 "is_configured": false, 00:14:52.175 "data_offset": 256, 00:14:52.175 "data_size": 7936 00:14:52.175 } 00:14:52.175 ] 00:14:52.175 }' 00:14:52.175 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.175 02:39:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.743 [2024-07-25 02:39:39.524561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.743 [2024-07-25 02:39:39.524591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.743 [2024-07-25 02:39:39.524598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f670ba34f00 00:14:52.743 [2024-07-25 02:39:39.524604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.743 [2024-07-25 02:39:39.524647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.743 [2024-07-25 02:39:39.524653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.743 [2024-07-25 02:39:39.524666] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:52.743 [2024-07-25 02:39:39.524671] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.743 [2024-07-25 02:39:39.524682] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f670ba35180 00:14:52.743 [2024-07-25 02:39:39.524684] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:52.743 [2024-07-25 02:39:39.524698] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f670ba97e20 00:14:52.743 [2024-07-25 02:39:39.524715] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f670ba35180 00:14:52.743 [2024-07-25 02:39:39.524718] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f670ba35180 00:14:52.743 [2024-07-25 02:39:39.524729] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.743 pt2 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:52.743 
02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.743 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.002 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.002 "name": "raid_bdev1", 00:14:53.002 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:53.002 "strip_size_kb": 0, 00:14:53.002 "state": "online", 00:14:53.002 "raid_level": "raid1", 00:14:53.002 "superblock": true, 00:14:53.002 "num_base_bdevs": 2, 00:14:53.002 "num_base_bdevs_discovered": 2, 00:14:53.002 "num_base_bdevs_operational": 2, 00:14:53.002 "base_bdevs_list": [ 00:14:53.002 { 00:14:53.002 "name": "pt1", 00:14:53.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.002 "is_configured": true, 00:14:53.002 "data_offset": 256, 00:14:53.002 "data_size": 7936 00:14:53.002 }, 00:14:53.002 { 00:14:53.002 "name": "pt2", 00:14:53.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.002 "is_configured": true, 00:14:53.002 "data_offset": 256, 00:14:53.002 "data_size": 7936 00:14:53.002 } 00:14:53.002 ] 00:14:53.002 }' 00:14:53.002 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.002 02:39:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:53.261 02:39:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:53.261 [2024-07-25 02:39:40.152637] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:53.521 "name": "raid_bdev1", 00:14:53.521 "aliases": [ 00:14:53.521 
"21a03afa-4a2f-11ef-9c8e-7947904e2597" 00:14:53.521 ], 00:14:53.521 "product_name": "Raid Volume", 00:14:53.521 "block_size": 4096, 00:14:53.521 "num_blocks": 7936, 00:14:53.521 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:53.521 "md_size": 32, 00:14:53.521 "md_interleave": false, 00:14:53.521 "dif_type": 0, 00:14:53.521 "assigned_rate_limits": { 00:14:53.521 "rw_ios_per_sec": 0, 00:14:53.521 "rw_mbytes_per_sec": 0, 00:14:53.521 "r_mbytes_per_sec": 0, 00:14:53.521 "w_mbytes_per_sec": 0 00:14:53.521 }, 00:14:53.521 "claimed": false, 00:14:53.521 "zoned": false, 00:14:53.521 "supported_io_types": { 00:14:53.521 "read": true, 00:14:53.521 "write": true, 00:14:53.521 "unmap": false, 00:14:53.521 "flush": false, 00:14:53.521 "reset": true, 00:14:53.521 "nvme_admin": false, 00:14:53.521 "nvme_io": false, 00:14:53.521 "nvme_io_md": false, 00:14:53.521 "write_zeroes": true, 00:14:53.521 "zcopy": false, 00:14:53.521 "get_zone_info": false, 00:14:53.521 "zone_management": false, 00:14:53.521 "zone_append": false, 00:14:53.521 "compare": false, 00:14:53.521 "compare_and_write": false, 00:14:53.521 "abort": false, 00:14:53.521 "seek_hole": false, 00:14:53.521 "seek_data": false, 00:14:53.521 "copy": false, 00:14:53.521 "nvme_iov_md": false 00:14:53.521 }, 00:14:53.521 "memory_domains": [ 00:14:53.521 { 00:14:53.521 "dma_device_id": "system", 00:14:53.521 "dma_device_type": 1 00:14:53.521 }, 00:14:53.521 { 00:14:53.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.521 "dma_device_type": 2 00:14:53.521 }, 00:14:53.521 { 00:14:53.521 "dma_device_id": "system", 00:14:53.521 "dma_device_type": 1 00:14:53.521 }, 00:14:53.521 { 00:14:53.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.521 "dma_device_type": 2 00:14:53.521 } 00:14:53.521 ], 00:14:53.521 "driver_specific": { 00:14:53.521 "raid": { 00:14:53.521 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:53.521 "strip_size_kb": 0, 00:14:53.521 "state": "online", 00:14:53.521 "raid_level": "raid1", 00:14:53.521 "superblock": true, 00:14:53.521 "num_base_bdevs": 2, 00:14:53.521 "num_base_bdevs_discovered": 2, 00:14:53.521 "num_base_bdevs_operational": 2, 00:14:53.521 "base_bdevs_list": [ 00:14:53.521 { 00:14:53.521 "name": "pt1", 00:14:53.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.521 "is_configured": true, 00:14:53.521 "data_offset": 256, 00:14:53.521 "data_size": 7936 00:14:53.521 }, 00:14:53.521 { 00:14:53.521 "name": "pt2", 00:14:53.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.521 "is_configured": true, 00:14:53.521 "data_offset": 256, 00:14:53.521 "data_size": 7936 00:14:53.521 } 00:14:53.521 ] 00:14:53.521 } 00:14:53.521 } 00:14:53.521 }' 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:53.521 pt2' 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.521 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.521 "name": "pt1", 
00:14:53.521 "aliases": [ 00:14:53.521 "00000000-0000-0000-0000-000000000001" 00:14:53.521 ], 00:14:53.521 "product_name": "passthru", 00:14:53.521 "block_size": 4096, 00:14:53.521 "num_blocks": 8192, 00:14:53.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.521 "md_size": 32, 00:14:53.521 "md_interleave": false, 00:14:53.521 "dif_type": 0, 00:14:53.521 "assigned_rate_limits": { 00:14:53.521 "rw_ios_per_sec": 0, 00:14:53.521 "rw_mbytes_per_sec": 0, 00:14:53.521 "r_mbytes_per_sec": 0, 00:14:53.521 "w_mbytes_per_sec": 0 00:14:53.521 }, 00:14:53.521 "claimed": true, 00:14:53.521 "claim_type": "exclusive_write", 00:14:53.521 "zoned": false, 00:14:53.521 "supported_io_types": { 00:14:53.521 "read": true, 00:14:53.521 "write": true, 00:14:53.521 "unmap": true, 00:14:53.521 "flush": true, 00:14:53.521 "reset": true, 00:14:53.521 "nvme_admin": false, 00:14:53.521 "nvme_io": false, 00:14:53.521 "nvme_io_md": false, 00:14:53.521 "write_zeroes": true, 00:14:53.521 "zcopy": true, 00:14:53.521 "get_zone_info": false, 00:14:53.521 "zone_management": false, 00:14:53.521 "zone_append": false, 00:14:53.521 "compare": false, 00:14:53.521 "compare_and_write": false, 00:14:53.521 "abort": true, 00:14:53.521 "seek_hole": false, 00:14:53.521 "seek_data": false, 00:14:53.521 "copy": true, 00:14:53.521 "nvme_iov_md": false 00:14:53.522 }, 00:14:53.522 "memory_domains": [ 00:14:53.522 { 00:14:53.522 "dma_device_id": "system", 00:14:53.522 "dma_device_type": 1 00:14:53.522 }, 00:14:53.522 { 00:14:53.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.522 "dma_device_type": 2 00:14:53.522 } 00:14:53.522 ], 00:14:53.522 "driver_specific": { 00:14:53.522 "passthru": { 00:14:53.522 "name": "pt1", 00:14:53.522 "base_bdev_name": "malloc1" 00:14:53.522 } 00:14:53.522 } 00:14:53.522 }' 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:14:53.522 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.781 
02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.781 "name": "pt2", 00:14:53.781 "aliases": [ 00:14:53.781 "00000000-0000-0000-0000-000000000002" 00:14:53.781 ], 00:14:53.781 "product_name": "passthru", 00:14:53.781 "block_size": 4096, 00:14:53.781 "num_blocks": 8192, 00:14:53.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.781 "md_size": 32, 00:14:53.781 "md_interleave": false, 00:14:53.781 "dif_type": 0, 00:14:53.781 "assigned_rate_limits": { 00:14:53.781 "rw_ios_per_sec": 0, 00:14:53.781 "rw_mbytes_per_sec": 0, 00:14:53.781 "r_mbytes_per_sec": 0, 00:14:53.781 "w_mbytes_per_sec": 0 00:14:53.781 }, 00:14:53.781 "claimed": true, 00:14:53.781 "claim_type": "exclusive_write", 00:14:53.781 "zoned": false, 00:14:53.781 "supported_io_types": { 00:14:53.781 "read": true, 00:14:53.781 "write": true, 00:14:53.781 "unmap": true, 00:14:53.781 "flush": true, 00:14:53.781 "reset": true, 00:14:53.781 "nvme_admin": false, 00:14:53.781 "nvme_io": false, 00:14:53.781 "nvme_io_md": false, 00:14:53.781 "write_zeroes": true, 00:14:53.781 "zcopy": true, 00:14:53.781 "get_zone_info": false, 00:14:53.781 "zone_management": false, 00:14:53.781 "zone_append": false, 00:14:53.781 "compare": false, 00:14:53.781 "compare_and_write": false, 00:14:53.781 "abort": true, 00:14:53.781 "seek_hole": false, 00:14:53.781 "seek_data": false, 00:14:53.781 "copy": true, 00:14:53.781 "nvme_iov_md": false 00:14:53.781 }, 00:14:53.781 "memory_domains": [ 00:14:53.781 { 00:14:53.781 "dma_device_id": "system", 00:14:53.781 "dma_device_type": 1 00:14:53.781 }, 00:14:53.781 { 00:14:53.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.781 "dma_device_type": 2 00:14:53.781 } 00:14:53.781 ], 00:14:53.781 "driver_specific": { 00:14:53.781 "passthru": { 00:14:53.781 "name": "pt2", 00:14:53.781 "base_bdev_name": "malloc2" 00:14:53.781 } 00:14:53.781 } 00:14:53.781 }' 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.781 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:14:54.040 [2024-07-25 02:39:40.908695] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 21a03afa-4a2f-11ef-9c8e-7947904e2597 '!=' 21a03afa-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:14:54.040 02:39:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:54.299 [2024-07-25 02:39:41.092696] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.299 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.558 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.558 "name": "raid_bdev1", 00:14:54.558 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:54.558 "strip_size_kb": 0, 00:14:54.558 "state": "online", 00:14:54.558 "raid_level": "raid1", 00:14:54.558 "superblock": true, 00:14:54.558 "num_base_bdevs": 2, 00:14:54.558 "num_base_bdevs_discovered": 1, 00:14:54.558 "num_base_bdevs_operational": 1, 00:14:54.558 "base_bdevs_list": [ 00:14:54.559 { 00:14:54.559 "name": null, 00:14:54.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.559 "is_configured": false, 00:14:54.559 "data_offset": 256, 00:14:54.559 "data_size": 7936 00:14:54.559 }, 00:14:54.559 { 00:14:54.559 "name": "pt2", 00:14:54.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.559 "is_configured": true, 00:14:54.559 "data_offset": 256, 00:14:54.559 "data_size": 7936 00:14:54.559 } 00:14:54.559 ] 00:14:54.559 }' 00:14:54.559 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:14:54.559 02:39:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:54.817 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:55.076 [2024-07-25 02:39:41.764749] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.076 [2024-07-25 02:39:41.764762] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.076 [2024-07-25 02:39:41.764772] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.076 [2024-07-25 02:39:41.764778] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.076 [2024-07-25 02:39:41.764782] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f670ba35180 name raid_bdev1, state offline 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:55.076 02:39:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:55.336 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:55.336 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:55.336 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:55.336 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:55.336 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:14:55.336 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:55.595 [2024-07-25 02:39:42.308804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:55.595 [2024-07-25 02:39:42.308837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.595 [2024-07-25 02:39:42.308844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f670ba34f00 00:14:55.595 [2024-07-25 02:39:42.308849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.595 [2024-07-25 02:39:42.309320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.595 [2024-07-25 02:39:42.309344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:55.595 [2024-07-25 02:39:42.309360] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:14:55.595 [2024-07-25 02:39:42.309368] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.595 [2024-07-25 02:39:42.309378] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f670ba35180 00:14:55.595 [2024-07-25 02:39:42.309381] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:55.595 [2024-07-25 02:39:42.309397] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f670ba97e20 00:14:55.595 [2024-07-25 02:39:42.309415] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f670ba35180 00:14:55.595 [2024-07-25 02:39:42.309418] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f670ba35180 00:14:55.595 [2024-07-25 02:39:42.309427] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.595 pt2 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.595 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.854 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.855 "name": "raid_bdev1", 00:14:55.855 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:55.855 "strip_size_kb": 0, 00:14:55.855 "state": "online", 00:14:55.855 "raid_level": "raid1", 00:14:55.855 "superblock": true, 00:14:55.855 "num_base_bdevs": 2, 00:14:55.855 "num_base_bdevs_discovered": 1, 00:14:55.855 "num_base_bdevs_operational": 1, 00:14:55.855 "base_bdevs_list": [ 00:14:55.855 { 00:14:55.855 "name": null, 00:14:55.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.855 "is_configured": false, 00:14:55.855 "data_offset": 256, 00:14:55.855 "data_size": 7936 00:14:55.855 }, 00:14:55.855 { 00:14:55.855 "name": "pt2", 00:14:55.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.855 "is_configured": true, 00:14:55.855 "data_offset": 256, 00:14:55.855 "data_size": 7936 00:14:55.855 } 00:14:55.855 ] 00:14:55.855 }' 00:14:55.855 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:14:55.855 02:39:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:56.113 02:39:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:56.113 [2024-07-25 02:39:42.976858] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.113 [2024-07-25 02:39:42.976868] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.113 [2024-07-25 02:39:42.976880] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.114 [2024-07-25 02:39:42.976888] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.114 [2024-07-25 02:39:42.976891] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f670ba35180 name raid_bdev1, state offline 00:14:56.114 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.114 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:56.372 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:56.372 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:56.372 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:14:56.372 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.631 [2024-07-25 02:39:43.376907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.631 [2024-07-25 02:39:43.376944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.631 [2024-07-25 02:39:43.376952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f670ba34c80 00:14:56.631 [2024-07-25 02:39:43.376957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.631 [2024-07-25 02:39:43.377412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.631 [2024-07-25 02:39:43.377435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.631 [2024-07-25 02:39:43.377453] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:56.631 [2024-07-25 02:39:43.377477] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.631 [2024-07-25 02:39:43.377491] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:56.631 [2024-07-25 02:39:43.377495] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:56.631 [2024-07-25 02:39:43.377498] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f670ba34780 name raid_bdev1, state configuring 00:14:56.631 [2024-07-25 02:39:43.377504] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.631 [2024-07-25 02:39:43.377515] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f670ba34780 00:14:56.631 [2024-07-25 
02:39:43.377518] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:56.631 [2024-07-25 02:39:43.377534] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f670ba97e20 00:14:56.631 [2024-07-25 02:39:43.377552] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f670ba34780 00:14:56.631 [2024-07-25 02:39:43.377556] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f670ba34780 00:14:56.631 [2024-07-25 02:39:43.377566] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.631 pt1 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.631 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.889 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.889 "name": "raid_bdev1", 00:14:56.889 "uuid": "21a03afa-4a2f-11ef-9c8e-7947904e2597", 00:14:56.889 "strip_size_kb": 0, 00:14:56.889 "state": "online", 00:14:56.889 "raid_level": "raid1", 00:14:56.889 "superblock": true, 00:14:56.889 "num_base_bdevs": 2, 00:14:56.889 "num_base_bdevs_discovered": 1, 00:14:56.889 "num_base_bdevs_operational": 1, 00:14:56.889 "base_bdevs_list": [ 00:14:56.889 { 00:14:56.889 "name": null, 00:14:56.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.889 "is_configured": false, 00:14:56.889 "data_offset": 256, 00:14:56.889 "data_size": 7936 00:14:56.889 }, 00:14:56.889 { 00:14:56.889 "name": "pt2", 00:14:56.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.889 "is_configured": true, 00:14:56.889 "data_offset": 256, 00:14:56.889 "data_size": 7936 00:14:56.889 } 00:14:56.889 ] 00:14:56.889 }' 00:14:56.889 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.889 02:39:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:14:57.147 02:39:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:57.147 02:39:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:57.147 02:39:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:57.147 02:39:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:57.147 02:39:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:57.407 [2024-07-25 02:39:44.204990] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 21a03afa-4a2f-11ef-9c8e-7947904e2597 '!=' 21a03afa-4a2f-11ef-9c8e-7947904e2597 ']' 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 65907 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 65907 ']' 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 65907 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 65907 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:57.407 killing process with pid 65907 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65907' 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 65907 00:14:57.407 [2024-07-25 02:39:44.250348] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.407 [2024-07-25 02:39:44.250362] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.407 [2024-07-25 02:39:44.250379] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.407 [2024-07-25 02:39:44.250394] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f670ba34780 name raid_bdev1, state offline 00:14:57.407 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 65907 00:14:57.407 [2024-07-25 02:39:44.259941] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.667 02:39:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:14:57.667 00:14:57.667 real 0m10.946s 00:14:57.667 user 0m18.247s 00:14:57.667 sys 0m2.323s 00:14:57.667 02:39:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.667 02:39:44 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.667 ************************************ 00:14:57.667 END TEST raid_superblock_test_md_separate 00:14:57.667 ************************************ 00:14:57.667 02:39:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:57.667 02:39:44 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:14:57.667 02:39:44 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:14:57.667 02:39:44 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:14:57.667 02:39:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:57.667 02:39:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.667 02:39:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.667 ************************************ 00:14:57.667 START TEST raid_state_function_test_sb_md_interleaved 00:14:57.667 ************************************ 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' 
raid1 '!=' raid1 ']' 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:57.667 Process raid pid: 66292 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66292 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66292' 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66292 /var/tmp/spdk-raid.sock 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66292 ']' 00:14:57.667 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:57.668 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:57.668 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:57.668 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.668 02:39:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:14:57.668 [2024-07-25 02:39:44.520800] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:14:57.668 [2024-07-25 02:39:44.521175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:58.607 EAL: TSC is not safe to use in SMP mode 00:14:58.607 EAL: TSC is not invariant 00:14:58.607 [2024-07-25 02:39:45.398154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.607 [2024-07-25 02:39:45.476846] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:14:58.607 [2024-07-25 02:39:45.478185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.607 [2024-07-25 02:39:45.478718] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.607 [2024-07-25 02:39:45.478730] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.549 [2024-07-25 02:39:46.269610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.549 [2024-07-25 02:39:46.269651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.549 [2024-07-25 02:39:46.269655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.549 [2024-07-25 02:39:46.269660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.549 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.810 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.810 "name": "Existed_Raid", 00:14:59.810 "uuid": "27c1a878-4a2f-11ef-9c8e-7947904e2597", 00:14:59.810 "strip_size_kb": 0, 00:14:59.810 "state": "configuring", 00:14:59.810 "raid_level": "raid1", 00:14:59.810 "superblock": true, 00:14:59.810 "num_base_bdevs": 2, 00:14:59.810 "num_base_bdevs_discovered": 0, 00:14:59.810 "num_base_bdevs_operational": 2, 00:14:59.810 
"base_bdevs_list": [ 00:14:59.810 { 00:14:59.810 "name": "BaseBdev1", 00:14:59.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.810 "is_configured": false, 00:14:59.810 "data_offset": 0, 00:14:59.810 "data_size": 0 00:14:59.810 }, 00:14:59.810 { 00:14:59.810 "name": "BaseBdev2", 00:14:59.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.810 "is_configured": false, 00:14:59.810 "data_offset": 0, 00:14:59.810 "data_size": 0 00:14:59.810 } 00:14:59.810 ] 00:14:59.810 }' 00:14:59.810 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.810 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:00.069 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.069 [2024-07-25 02:39:46.933665] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.069 [2024-07-25 02:39:46.933676] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8985434500 name Existed_Raid, state configuring 00:15:00.069 02:39:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.327 [2024-07-25 02:39:47.129691] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.327 [2024-07-25 02:39:47.129717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.327 [2024-07-25 02:39:47.129720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.327 [2024-07-25 02:39:47.129726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.327 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:00.586 [2024-07-25 02:39:47.338397] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.586 BaseBdev1 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.586 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:15:00.845 [ 00:15:00.845 { 00:15:00.845 "name": "BaseBdev1", 00:15:00.845 "aliases": [ 00:15:00.845 "2864a3c1-4a2f-11ef-9c8e-7947904e2597" 00:15:00.845 ], 00:15:00.845 "product_name": "Malloc disk", 00:15:00.845 "block_size": 4128, 00:15:00.845 "num_blocks": 8192, 00:15:00.845 "uuid": "2864a3c1-4a2f-11ef-9c8e-7947904e2597", 00:15:00.845 "md_size": 32, 00:15:00.845 "md_interleave": true, 00:15:00.845 "dif_type": 0, 00:15:00.845 "assigned_rate_limits": { 00:15:00.845 "rw_ios_per_sec": 0, 00:15:00.845 "rw_mbytes_per_sec": 0, 00:15:00.845 "r_mbytes_per_sec": 0, 00:15:00.845 "w_mbytes_per_sec": 0 00:15:00.845 }, 00:15:00.845 "claimed": true, 00:15:00.845 "claim_type": "exclusive_write", 00:15:00.845 "zoned": false, 00:15:00.845 "supported_io_types": { 00:15:00.845 "read": true, 00:15:00.845 "write": true, 00:15:00.845 "unmap": true, 00:15:00.845 "flush": true, 00:15:00.845 "reset": true, 00:15:00.845 "nvme_admin": false, 00:15:00.845 "nvme_io": false, 00:15:00.845 "nvme_io_md": false, 00:15:00.845 "write_zeroes": true, 00:15:00.845 "zcopy": true, 00:15:00.845 "get_zone_info": false, 00:15:00.845 "zone_management": false, 00:15:00.845 "zone_append": false, 00:15:00.845 "compare": false, 00:15:00.845 "compare_and_write": false, 00:15:00.845 "abort": true, 00:15:00.845 "seek_hole": false, 00:15:00.845 "seek_data": false, 00:15:00.845 "copy": true, 00:15:00.845 "nvme_iov_md": false 00:15:00.845 }, 00:15:00.845 "memory_domains": [ 00:15:00.845 { 00:15:00.845 "dma_device_id": "system", 00:15:00.845 "dma_device_type": 1 00:15:00.845 }, 00:15:00.845 { 00:15:00.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.845 "dma_device_type": 2 00:15:00.845 } 00:15:00.845 ], 00:15:00.845 "driver_specific": {} 00:15:00.845 } 00:15:00.845 ] 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.845 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:01.104 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.104 "name": "Existed_Raid", 00:15:01.104 "uuid": "2844e571-4a2f-11ef-9c8e-7947904e2597", 00:15:01.104 "strip_size_kb": 0, 00:15:01.104 "state": "configuring", 00:15:01.104 "raid_level": "raid1", 00:15:01.104 "superblock": true, 00:15:01.104 "num_base_bdevs": 2, 00:15:01.104 "num_base_bdevs_discovered": 1, 00:15:01.104 "num_base_bdevs_operational": 2, 00:15:01.104 "base_bdevs_list": [ 00:15:01.104 { 00:15:01.104 "name": "BaseBdev1", 00:15:01.104 "uuid": "2864a3c1-4a2f-11ef-9c8e-7947904e2597", 00:15:01.104 "is_configured": true, 00:15:01.104 "data_offset": 256, 00:15:01.104 "data_size": 7936 00:15:01.104 }, 00:15:01.104 { 00:15:01.104 "name": "BaseBdev2", 00:15:01.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.104 "is_configured": false, 00:15:01.104 "data_offset": 0, 00:15:01.104 "data_size": 0 00:15:01.104 } 00:15:01.104 ] 00:15:01.104 }' 00:15:01.104 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.104 02:39:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:01.363 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.622 [2024-07-25 02:39:48.361795] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.622 [2024-07-25 02:39:48.361814] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8985434500 name Existed_Raid, state configuring 00:15:01.622 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.881 [2024-07-25 02:39:48.541824] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.881 [2024-07-25 02:39:48.542458] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.881 [2024-07-25 02:39:48.542492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.881 "name": "Existed_Raid", 00:15:01.881 "uuid": "291c5eca-4a2f-11ef-9c8e-7947904e2597", 00:15:01.881 "strip_size_kb": 0, 00:15:01.881 "state": "configuring", 00:15:01.881 "raid_level": "raid1", 00:15:01.881 "superblock": true, 00:15:01.881 "num_base_bdevs": 2, 00:15:01.881 "num_base_bdevs_discovered": 1, 00:15:01.881 "num_base_bdevs_operational": 2, 00:15:01.881 "base_bdevs_list": [ 00:15:01.881 { 00:15:01.881 "name": "BaseBdev1", 00:15:01.881 "uuid": "2864a3c1-4a2f-11ef-9c8e-7947904e2597", 00:15:01.881 "is_configured": true, 00:15:01.881 "data_offset": 256, 00:15:01.881 "data_size": 7936 00:15:01.881 }, 00:15:01.881 { 00:15:01.881 "name": "BaseBdev2", 00:15:01.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.881 "is_configured": false, 00:15:01.881 "data_offset": 0, 00:15:01.881 "data_size": 0 00:15:01.881 } 00:15:01.881 ] 00:15:01.881 }' 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.881 02:39:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:02.141 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:02.400 [2024-07-25 02:39:49.209925] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.400 [2024-07-25 02:39:49.209975] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a8985434a00 00:15:02.400 [2024-07-25 02:39:49.209979] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:02.400 [2024-07-25 02:39:49.209994] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a8985497e20 00:15:02.400 [2024-07-25 02:39:49.210004] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a8985434a00 00:15:02.400 [2024-07-25 02:39:49.210007] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2a8985434a00 00:15:02.400 [2024-07-25 02:39:49.210015] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.400 BaseBdev2 00:15:02.400 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:02.400 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:02.400 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.400 
02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:15:02.400 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.400 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.400 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.659 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.918 [ 00:15:02.918 { 00:15:02.918 "name": "BaseBdev2", 00:15:02.918 "aliases": [ 00:15:02.918 "29824ed8-4a2f-11ef-9c8e-7947904e2597" 00:15:02.918 ], 00:15:02.918 "product_name": "Malloc disk", 00:15:02.918 "block_size": 4128, 00:15:02.918 "num_blocks": 8192, 00:15:02.918 "uuid": "29824ed8-4a2f-11ef-9c8e-7947904e2597", 00:15:02.918 "md_size": 32, 00:15:02.918 "md_interleave": true, 00:15:02.918 "dif_type": 0, 00:15:02.918 "assigned_rate_limits": { 00:15:02.918 "rw_ios_per_sec": 0, 00:15:02.918 "rw_mbytes_per_sec": 0, 00:15:02.918 "r_mbytes_per_sec": 0, 00:15:02.918 "w_mbytes_per_sec": 0 00:15:02.918 }, 00:15:02.918 "claimed": true, 00:15:02.918 "claim_type": "exclusive_write", 00:15:02.918 "zoned": false, 00:15:02.918 "supported_io_types": { 00:15:02.918 "read": true, 00:15:02.918 "write": true, 00:15:02.918 "unmap": true, 00:15:02.918 "flush": true, 00:15:02.918 "reset": true, 00:15:02.919 "nvme_admin": false, 00:15:02.919 "nvme_io": false, 00:15:02.919 "nvme_io_md": false, 00:15:02.919 "write_zeroes": true, 00:15:02.919 "zcopy": true, 00:15:02.919 "get_zone_info": false, 00:15:02.919 "zone_management": false, 00:15:02.919 "zone_append": false, 00:15:02.919 "compare": false, 00:15:02.919 "compare_and_write": false, 00:15:02.919 "abort": true, 00:15:02.919 "seek_hole": false, 00:15:02.919 "seek_data": false, 00:15:02.919 "copy": true, 00:15:02.919 "nvme_iov_md": false 00:15:02.919 }, 00:15:02.919 "memory_domains": [ 00:15:02.919 { 00:15:02.919 "dma_device_id": "system", 00:15:02.919 "dma_device_type": 1 00:15:02.919 }, 00:15:02.919 { 00:15:02.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.919 "dma_device_type": 2 00:15:02.919 } 00:15:02.919 ], 00:15:02.919 "driver_specific": {} 00:15:02.919 } 00:15:02.919 ] 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.919 "name": "Existed_Raid", 00:15:02.919 "uuid": "291c5eca-4a2f-11ef-9c8e-7947904e2597", 00:15:02.919 "strip_size_kb": 0, 00:15:02.919 "state": "online", 00:15:02.919 "raid_level": "raid1", 00:15:02.919 "superblock": true, 00:15:02.919 "num_base_bdevs": 2, 00:15:02.919 "num_base_bdevs_discovered": 2, 00:15:02.919 "num_base_bdevs_operational": 2, 00:15:02.919 "base_bdevs_list": [ 00:15:02.919 { 00:15:02.919 "name": "BaseBdev1", 00:15:02.919 "uuid": "2864a3c1-4a2f-11ef-9c8e-7947904e2597", 00:15:02.919 "is_configured": true, 00:15:02.919 "data_offset": 256, 00:15:02.919 "data_size": 7936 00:15:02.919 }, 00:15:02.919 { 00:15:02.919 "name": "BaseBdev2", 00:15:02.919 "uuid": "29824ed8-4a2f-11ef-9c8e-7947904e2597", 00:15:02.919 "is_configured": true, 00:15:02.919 "data_offset": 256, 00:15:02.919 "data_size": 7936 00:15:02.919 } 00:15:02.919 ] 00:15:02.919 }' 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.919 02:39:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:03.179 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:03.440 [2024-07-25 02:39:50.217988] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.440 02:39:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:03.440 "name": "Existed_Raid", 00:15:03.440 "aliases": [ 00:15:03.440 "291c5eca-4a2f-11ef-9c8e-7947904e2597" 00:15:03.440 ], 00:15:03.440 "product_name": "Raid Volume", 00:15:03.440 "block_size": 4128, 00:15:03.440 "num_blocks": 7936, 00:15:03.440 "uuid": "291c5eca-4a2f-11ef-9c8e-7947904e2597", 00:15:03.440 "md_size": 32, 00:15:03.440 "md_interleave": true, 00:15:03.440 "dif_type": 0, 00:15:03.440 "assigned_rate_limits": { 00:15:03.440 "rw_ios_per_sec": 0, 00:15:03.440 "rw_mbytes_per_sec": 0, 00:15:03.440 "r_mbytes_per_sec": 0, 00:15:03.440 "w_mbytes_per_sec": 0 00:15:03.440 }, 00:15:03.440 "claimed": false, 00:15:03.440 "zoned": false, 00:15:03.440 "supported_io_types": { 00:15:03.440 "read": true, 00:15:03.440 "write": true, 00:15:03.440 "unmap": false, 00:15:03.440 "flush": false, 00:15:03.440 "reset": true, 00:15:03.440 "nvme_admin": false, 00:15:03.440 "nvme_io": false, 00:15:03.440 "nvme_io_md": false, 00:15:03.440 "write_zeroes": true, 00:15:03.440 "zcopy": false, 00:15:03.440 "get_zone_info": false, 00:15:03.440 "zone_management": false, 00:15:03.440 "zone_append": false, 00:15:03.440 "compare": false, 00:15:03.440 "compare_and_write": false, 00:15:03.440 "abort": false, 00:15:03.440 "seek_hole": false, 00:15:03.440 "seek_data": false, 00:15:03.440 "copy": false, 00:15:03.440 "nvme_iov_md": false 00:15:03.440 }, 00:15:03.440 "memory_domains": [ 00:15:03.440 { 00:15:03.440 "dma_device_id": "system", 00:15:03.440 "dma_device_type": 1 00:15:03.440 }, 00:15:03.440 { 00:15:03.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.440 "dma_device_type": 2 00:15:03.440 }, 00:15:03.440 { 00:15:03.440 "dma_device_id": "system", 00:15:03.440 "dma_device_type": 1 00:15:03.440 }, 00:15:03.440 { 00:15:03.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.440 "dma_device_type": 2 00:15:03.440 } 00:15:03.440 ], 00:15:03.440 "driver_specific": { 00:15:03.440 "raid": { 00:15:03.440 "uuid": "291c5eca-4a2f-11ef-9c8e-7947904e2597", 00:15:03.440 "strip_size_kb": 0, 00:15:03.440 "state": "online", 00:15:03.440 "raid_level": "raid1", 00:15:03.440 "superblock": true, 00:15:03.440 "num_base_bdevs": 2, 00:15:03.440 "num_base_bdevs_discovered": 2, 00:15:03.440 "num_base_bdevs_operational": 2, 00:15:03.440 "base_bdevs_list": [ 00:15:03.440 { 00:15:03.440 "name": "BaseBdev1", 00:15:03.440 "uuid": "2864a3c1-4a2f-11ef-9c8e-7947904e2597", 00:15:03.440 "is_configured": true, 00:15:03.440 "data_offset": 256, 00:15:03.440 "data_size": 7936 00:15:03.440 }, 00:15:03.440 { 00:15:03.440 "name": "BaseBdev2", 00:15:03.440 "uuid": "29824ed8-4a2f-11ef-9c8e-7947904e2597", 00:15:03.440 "is_configured": true, 00:15:03.440 "data_offset": 256, 00:15:03.440 "data_size": 7936 00:15:03.440 } 00:15:03.440 ] 00:15:03.440 } 00:15:03.440 } 00:15:03.440 }' 00:15:03.440 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.440 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:03.440 BaseBdev2' 00:15:03.440 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.440 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
00:15:03.440 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.700 "name": "BaseBdev1", 00:15:03.700 "aliases": [ 00:15:03.700 "2864a3c1-4a2f-11ef-9c8e-7947904e2597" 00:15:03.700 ], 00:15:03.700 "product_name": "Malloc disk", 00:15:03.700 "block_size": 4128, 00:15:03.700 "num_blocks": 8192, 00:15:03.700 "uuid": "2864a3c1-4a2f-11ef-9c8e-7947904e2597", 00:15:03.700 "md_size": 32, 00:15:03.700 "md_interleave": true, 00:15:03.700 "dif_type": 0, 00:15:03.700 "assigned_rate_limits": { 00:15:03.700 "rw_ios_per_sec": 0, 00:15:03.700 "rw_mbytes_per_sec": 0, 00:15:03.700 "r_mbytes_per_sec": 0, 00:15:03.700 "w_mbytes_per_sec": 0 00:15:03.700 }, 00:15:03.700 "claimed": true, 00:15:03.700 "claim_type": "exclusive_write", 00:15:03.700 "zoned": false, 00:15:03.700 "supported_io_types": { 00:15:03.700 "read": true, 00:15:03.700 "write": true, 00:15:03.700 "unmap": true, 00:15:03.700 "flush": true, 00:15:03.700 "reset": true, 00:15:03.700 "nvme_admin": false, 00:15:03.700 "nvme_io": false, 00:15:03.700 "nvme_io_md": false, 00:15:03.700 "write_zeroes": true, 00:15:03.700 "zcopy": true, 00:15:03.700 "get_zone_info": false, 00:15:03.700 "zone_management": false, 00:15:03.700 "zone_append": false, 00:15:03.700 "compare": false, 00:15:03.700 "compare_and_write": false, 00:15:03.700 "abort": true, 00:15:03.700 "seek_hole": false, 00:15:03.700 "seek_data": false, 00:15:03.700 "copy": true, 00:15:03.700 "nvme_iov_md": false 00:15:03.700 }, 00:15:03.700 "memory_domains": [ 00:15:03.700 { 00:15:03.700 "dma_device_id": "system", 00:15:03.700 "dma_device_type": 1 00:15:03.700 }, 00:15:03.700 { 00:15:03.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.700 "dma_device_type": 2 00:15:03.700 } 00:15:03.700 ], 00:15:03.700 "driver_specific": {} 00:15:03.700 }' 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:15:03.700 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.701 02:39:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:03.701 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.961 "name": "BaseBdev2", 00:15:03.961 "aliases": [ 00:15:03.961 "29824ed8-4a2f-11ef-9c8e-7947904e2597" 00:15:03.961 ], 00:15:03.961 "product_name": "Malloc disk", 00:15:03.961 "block_size": 4128, 00:15:03.961 "num_blocks": 8192, 00:15:03.961 "uuid": "29824ed8-4a2f-11ef-9c8e-7947904e2597", 00:15:03.961 "md_size": 32, 00:15:03.961 "md_interleave": true, 00:15:03.961 "dif_type": 0, 00:15:03.961 "assigned_rate_limits": { 00:15:03.961 "rw_ios_per_sec": 0, 00:15:03.961 "rw_mbytes_per_sec": 0, 00:15:03.961 "r_mbytes_per_sec": 0, 00:15:03.961 "w_mbytes_per_sec": 0 00:15:03.961 }, 00:15:03.961 "claimed": true, 00:15:03.961 "claim_type": "exclusive_write", 00:15:03.961 "zoned": false, 00:15:03.961 "supported_io_types": { 00:15:03.961 "read": true, 00:15:03.961 "write": true, 00:15:03.961 "unmap": true, 00:15:03.961 "flush": true, 00:15:03.961 "reset": true, 00:15:03.961 "nvme_admin": false, 00:15:03.961 "nvme_io": false, 00:15:03.961 "nvme_io_md": false, 00:15:03.961 "write_zeroes": true, 00:15:03.961 "zcopy": true, 00:15:03.961 "get_zone_info": false, 00:15:03.961 "zone_management": false, 00:15:03.961 "zone_append": false, 00:15:03.961 "compare": false, 00:15:03.961 "compare_and_write": false, 00:15:03.961 "abort": true, 00:15:03.961 "seek_hole": false, 00:15:03.961 "seek_data": false, 00:15:03.961 "copy": true, 00:15:03.961 "nvme_iov_md": false 00:15:03.961 }, 00:15:03.961 "memory_domains": [ 00:15:03.961 { 00:15:03.961 "dma_device_id": "system", 00:15:03.961 "dma_device_type": 1 00:15:03.961 }, 00:15:03.961 { 00:15:03.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.961 "dma_device_type": 2 00:15:03.961 } 00:15:03.961 ], 00:15:03.961 "driver_specific": {} 00:15:03.961 }' 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:15:03.961 02:39:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:04.222 [2024-07-25 02:39:50.986038] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.222 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.482 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.482 "name": "Existed_Raid", 00:15:04.482 "uuid": "291c5eca-4a2f-11ef-9c8e-7947904e2597", 00:15:04.482 "strip_size_kb": 0, 00:15:04.482 "state": "online", 00:15:04.482 "raid_level": "raid1", 00:15:04.482 "superblock": true, 00:15:04.482 "num_base_bdevs": 2, 00:15:04.482 "num_base_bdevs_discovered": 1, 00:15:04.482 "num_base_bdevs_operational": 1, 00:15:04.482 "base_bdevs_list": [ 00:15:04.482 { 00:15:04.482 "name": null, 00:15:04.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.482 "is_configured": false, 00:15:04.482 "data_offset": 256, 00:15:04.482 "data_size": 7936 00:15:04.482 }, 00:15:04.482 { 00:15:04.482 "name": "BaseBdev2", 00:15:04.482 "uuid": "29824ed8-4a2f-11ef-9c8e-7947904e2597", 00:15:04.482 "is_configured": true, 00:15:04.482 "data_offset": 256, 00:15:04.482 "data_size": 
7936 00:15:04.482 } 00:15:04.482 ] 00:15:04.482 }' 00:15:04.482 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.482 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:04.742 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:04.742 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:04.742 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.742 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:05.002 [2024-07-25 02:39:51.806892] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.002 [2024-07-25 02:39:51.806924] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.002 [2024-07-25 02:39:51.811735] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.002 [2024-07-25 02:39:51.811747] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.002 [2024-07-25 02:39:51.811750] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8985434a00 name Existed_Raid, state offline 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.002 02:39:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66292 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66292 ']' 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66292 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 
-- # '[' FreeBSD = Linux ']' 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66292 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:05.262 killing process with pid 66292 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66292' 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66292 00:15:05.262 [2024-07-25 02:39:52.068288] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.262 [2024-07-25 02:39:52.068321] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.262 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66292 00:15:05.523 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:15:05.523 00:15:05.523 real 0m7.739s 00:15:05.523 user 0m12.217s 00:15:05.523 sys 0m1.899s 00:15:05.523 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:05.523 02:39:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.523 ************************************ 00:15:05.523 END TEST raid_state_function_test_sb_md_interleaved 00:15:05.523 ************************************ 00:15:05.523 02:39:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:05.523 02:39:52 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:15:05.523 02:39:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:05.523 02:39:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.523 02:39:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.523 ************************************ 00:15:05.523 START TEST raid_superblock_test_md_interleaved 00:15:05.523 ************************************ 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
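Note on the transition above: the finished test and the one starting here are both driven by run_test wrappers in bdev_raid.sh; raid_superblock_test is invoked as "raid_superblock_test raid1 2", i.e. a RAID1 volume assembled from two base bdevs, with an on-disk superblock. The local declarations expanded here and just below set up its scaffolding; a minimal sketch of what they amount to, with names taken from the bdev_raid.sh@392-407 lines in this log (simplified, not the real function body):
raid_level=raid1          # from $1
num_base_bdevs=2          # from $2
base_bdevs_malloc=()      # malloc bdevs backing each passthru bdev
base_bdevs_pt=()          # passthru bdevs used as the raid base bdevs
base_bdevs_pt_uuid=()     # fixed UUIDs assigned to the passthru bdevs
raid_bdev_name=raid_bdev1
strip_size=0              # raid1 takes no strip size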
00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=66560 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 66560 /var/tmp/spdk-raid.sock 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66560 ']' 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.523 02:39:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:05.523 [2024-07-25 02:39:52.329454] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:05.523 [2024-07-25 02:39:52.329719] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:06.093 EAL: TSC is not safe to use in SMP mode 00:15:06.093 EAL: TSC is not invariant 00:15:06.093 [2024-07-25 02:39:52.767029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.093 [2024-07-25 02:39:52.861900] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
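Before any RPCs can be issued, the test starts the bdev_svc application with a dedicated RPC socket and bdev_raid debug logging, then blocks in waitforlisten until the socket answers; the EAL/TSC notices above are the normal FreeBSD single-core startup output. A rough manual equivalent, assuming the paths shown in this log (the real waitforlisten helper in autotest_common.sh is more elaborate):
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# poll the UNIX-domain socket until the app answers a trivial RPC
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
done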
00:15:06.093 [2024-07-25 02:39:52.863559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.093 [2024-07-25 02:39:52.864101] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.093 [2024-07-25 02:39:52.864112] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.353 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:15:06.620 malloc1 00:15:06.620 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.910 [2024-07-25 02:39:53.587030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.910 [2024-07-25 02:39:53.587062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.910 [2024-07-25 02:39:53.587070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x347524834780 00:15:06.910 [2024-07-25 02:39:53.587075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.910 [2024-07-25 02:39:53.587549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.910 [2024-07-25 02:39:53.587576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.910 pt1 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.910 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:15:07.169 malloc2 00:15:07.170 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.170 [2024-07-25 02:39:53.971063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.170 [2024-07-25 02:39:53.971097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.170 [2024-07-25 02:39:53.971105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x347524834c80 00:15:07.170 [2024-07-25 02:39:53.971110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.170 [2024-07-25 02:39:53.971467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.170 [2024-07-25 02:39:53.971603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.170 pt2 00:15:07.170 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:07.170 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:07.170 02:39:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:07.429 [2024-07-25 02:39:54.151081] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.429 [2024-07-25 02:39:54.151335] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.429 [2024-07-25 02:39:54.151379] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x347524834f00 00:15:07.429 [2024-07-25 02:39:54.151384] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:07.429 [2024-07-25 02:39:54.151411] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x347524897e20 00:15:07.429 [2024-07-25 02:39:54.151423] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x347524834f00 00:15:07.429 [2024-07-25 02:39:54.151425] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x347524834f00 00:15:07.429 [2024-07-25 02:39:54.151433] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:07.429 02:39:54 
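The malloc/passthru/raid creation steps interleaved through the xtrace above reduce to a short RPC sequence; condensed here exactly as the rpc.py invocations appear in the log (the rpc() shorthand is only a convenience for this note):
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# 32 MB malloc bdevs, 4096-byte data blocks and, per the -m 32 -i flags, 32 bytes of interleaved metadata
rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc1
rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc2
# passthru bdevs with fixed UUIDs become the raid base bdevs
rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# RAID1 across both passthru bdevs, with an on-disk superblock (-s)
rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
# the DEBUG lines above confirm the result: blockcnt 7936, blocklen 4128 (4096 data + 32 md)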
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.429 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.688 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.688 "name": "raid_bdev1", 00:15:07.688 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:07.688 "strip_size_kb": 0, 00:15:07.688 "state": "online", 00:15:07.688 "raid_level": "raid1", 00:15:07.688 "superblock": true, 00:15:07.688 "num_base_bdevs": 2, 00:15:07.688 "num_base_bdevs_discovered": 2, 00:15:07.688 "num_base_bdevs_operational": 2, 00:15:07.688 "base_bdevs_list": [ 00:15:07.688 { 00:15:07.688 "name": "pt1", 00:15:07.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.688 "is_configured": true, 00:15:07.689 "data_offset": 256, 00:15:07.689 "data_size": 7936 00:15:07.689 }, 00:15:07.689 { 00:15:07.689 "name": "pt2", 00:15:07.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.689 "is_configured": true, 00:15:07.689 "data_offset": 256, 00:15:07.689 "data_size": 7936 00:15:07.689 } 00:15:07.689 ] 00:15:07.689 }' 00:15:07.689 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.689 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:07.948 [2024-07-25 02:39:54.811170] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:07.948 "name": "raid_bdev1", 
00:15:07.948 "aliases": [ 00:15:07.948 "2c74466f-4a2f-11ef-9c8e-7947904e2597" 00:15:07.948 ], 00:15:07.948 "product_name": "Raid Volume", 00:15:07.948 "block_size": 4128, 00:15:07.948 "num_blocks": 7936, 00:15:07.948 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:07.948 "md_size": 32, 00:15:07.948 "md_interleave": true, 00:15:07.948 "dif_type": 0, 00:15:07.948 "assigned_rate_limits": { 00:15:07.948 "rw_ios_per_sec": 0, 00:15:07.948 "rw_mbytes_per_sec": 0, 00:15:07.948 "r_mbytes_per_sec": 0, 00:15:07.948 "w_mbytes_per_sec": 0 00:15:07.948 }, 00:15:07.948 "claimed": false, 00:15:07.948 "zoned": false, 00:15:07.948 "supported_io_types": { 00:15:07.948 "read": true, 00:15:07.948 "write": true, 00:15:07.948 "unmap": false, 00:15:07.948 "flush": false, 00:15:07.948 "reset": true, 00:15:07.948 "nvme_admin": false, 00:15:07.948 "nvme_io": false, 00:15:07.948 "nvme_io_md": false, 00:15:07.948 "write_zeroes": true, 00:15:07.948 "zcopy": false, 00:15:07.948 "get_zone_info": false, 00:15:07.948 "zone_management": false, 00:15:07.948 "zone_append": false, 00:15:07.948 "compare": false, 00:15:07.948 "compare_and_write": false, 00:15:07.948 "abort": false, 00:15:07.948 "seek_hole": false, 00:15:07.948 "seek_data": false, 00:15:07.948 "copy": false, 00:15:07.948 "nvme_iov_md": false 00:15:07.948 }, 00:15:07.948 "memory_domains": [ 00:15:07.948 { 00:15:07.948 "dma_device_id": "system", 00:15:07.948 "dma_device_type": 1 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.948 "dma_device_type": 2 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "dma_device_id": "system", 00:15:07.948 "dma_device_type": 1 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.948 "dma_device_type": 2 00:15:07.948 } 00:15:07.948 ], 00:15:07.948 "driver_specific": { 00:15:07.948 "raid": { 00:15:07.948 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:07.948 "strip_size_kb": 0, 00:15:07.948 "state": "online", 00:15:07.948 "raid_level": "raid1", 00:15:07.948 "superblock": true, 00:15:07.948 "num_base_bdevs": 2, 00:15:07.948 "num_base_bdevs_discovered": 2, 00:15:07.948 "num_base_bdevs_operational": 2, 00:15:07.948 "base_bdevs_list": [ 00:15:07.948 { 00:15:07.948 "name": "pt1", 00:15:07.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.948 "is_configured": true, 00:15:07.948 "data_offset": 256, 00:15:07.948 "data_size": 7936 00:15:07.948 }, 00:15:07.948 { 00:15:07.948 "name": "pt2", 00:15:07.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.948 "is_configured": true, 00:15:07.948 "data_offset": 256, 00:15:07.948 "data_size": 7936 00:15:07.948 } 00:15:07.948 ] 00:15:07.948 } 00:15:07.948 } 00:15:07.948 }' 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:07.948 pt2' 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:07.948 02:39:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.208 "name": "pt1", 00:15:08.208 "aliases": [ 00:15:08.208 "00000000-0000-0000-0000-000000000001" 00:15:08.208 ], 00:15:08.208 "product_name": "passthru", 00:15:08.208 "block_size": 4128, 00:15:08.208 "num_blocks": 8192, 00:15:08.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.208 "md_size": 32, 00:15:08.208 "md_interleave": true, 00:15:08.208 "dif_type": 0, 00:15:08.208 "assigned_rate_limits": { 00:15:08.208 "rw_ios_per_sec": 0, 00:15:08.208 "rw_mbytes_per_sec": 0, 00:15:08.208 "r_mbytes_per_sec": 0, 00:15:08.208 "w_mbytes_per_sec": 0 00:15:08.208 }, 00:15:08.208 "claimed": true, 00:15:08.208 "claim_type": "exclusive_write", 00:15:08.208 "zoned": false, 00:15:08.208 "supported_io_types": { 00:15:08.208 "read": true, 00:15:08.208 "write": true, 00:15:08.208 "unmap": true, 00:15:08.208 "flush": true, 00:15:08.208 "reset": true, 00:15:08.208 "nvme_admin": false, 00:15:08.208 "nvme_io": false, 00:15:08.208 "nvme_io_md": false, 00:15:08.208 "write_zeroes": true, 00:15:08.208 "zcopy": true, 00:15:08.208 "get_zone_info": false, 00:15:08.208 "zone_management": false, 00:15:08.208 "zone_append": false, 00:15:08.208 "compare": false, 00:15:08.208 "compare_and_write": false, 00:15:08.208 "abort": true, 00:15:08.208 "seek_hole": false, 00:15:08.208 "seek_data": false, 00:15:08.208 "copy": true, 00:15:08.208 "nvme_iov_md": false 00:15:08.208 }, 00:15:08.208 "memory_domains": [ 00:15:08.208 { 00:15:08.208 "dma_device_id": "system", 00:15:08.208 "dma_device_type": 1 00:15:08.208 }, 00:15:08.208 { 00:15:08.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.208 "dma_device_type": 2 00:15:08.208 } 00:15:08.208 ], 00:15:08.208 "driver_specific": { 00:15:08.208 "passthru": { 00:15:08.208 "name": "pt1", 00:15:08.208 "base_bdev_name": "malloc1" 00:15:08.208 } 00:15:08.208 } 00:15:08.208 }' 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:15:08.208 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
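verify_raid_bdev_properties (bdev_raid.sh@194-208 above) pulls the JSON for the raid volume and for each configured base bdev and checks the interleaved-metadata geometry field by field. A condensed equivalent of those jq checks, piping bdev_get_bdevs directly instead of going through the saved $base_bdev_info variable:
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
rpc bdev_get_bdevs -b raid_bdev1 | jq '.[0].block_size'     # 4128 = 4096 data + 32 md, interleaved
rpc bdev_get_bdevs -b pt1        | jq '.[0].md_size'        # 32
rpc bdev_get_bdevs -b pt1        | jq '.[0].md_interleave'  # true
rpc bdev_get_bdevs -b pt1        | jq '.[0].dif_type'       # 0
The raid volume reports num_blocks 7936 while each passthru bdev reports 8192: data_offset is 256 blocks, reserved at the start of every base bdev for the superblock, and 8192 - 256 = 7936.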
00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.468 "name": "pt2", 00:15:08.468 "aliases": [ 00:15:08.468 "00000000-0000-0000-0000-000000000002" 00:15:08.468 ], 00:15:08.468 "product_name": "passthru", 00:15:08.468 "block_size": 4128, 00:15:08.468 "num_blocks": 8192, 00:15:08.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.468 "md_size": 32, 00:15:08.468 "md_interleave": true, 00:15:08.468 "dif_type": 0, 00:15:08.468 "assigned_rate_limits": { 00:15:08.468 "rw_ios_per_sec": 0, 00:15:08.468 "rw_mbytes_per_sec": 0, 00:15:08.468 "r_mbytes_per_sec": 0, 00:15:08.468 "w_mbytes_per_sec": 0 00:15:08.468 }, 00:15:08.468 "claimed": true, 00:15:08.468 "claim_type": "exclusive_write", 00:15:08.468 "zoned": false, 00:15:08.468 "supported_io_types": { 00:15:08.468 "read": true, 00:15:08.468 "write": true, 00:15:08.468 "unmap": true, 00:15:08.468 "flush": true, 00:15:08.468 "reset": true, 00:15:08.468 "nvme_admin": false, 00:15:08.468 "nvme_io": false, 00:15:08.468 "nvme_io_md": false, 00:15:08.468 "write_zeroes": true, 00:15:08.468 "zcopy": true, 00:15:08.468 "get_zone_info": false, 00:15:08.468 "zone_management": false, 00:15:08.468 "zone_append": false, 00:15:08.468 "compare": false, 00:15:08.468 "compare_and_write": false, 00:15:08.468 "abort": true, 00:15:08.468 "seek_hole": false, 00:15:08.468 "seek_data": false, 00:15:08.468 "copy": true, 00:15:08.468 "nvme_iov_md": false 00:15:08.468 }, 00:15:08.468 "memory_domains": [ 00:15:08.468 { 00:15:08.468 "dma_device_id": "system", 00:15:08.468 "dma_device_type": 1 00:15:08.468 }, 00:15:08.468 { 00:15:08.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.468 "dma_device_type": 2 00:15:08.468 } 00:15:08.468 ], 00:15:08.468 "driver_specific": { 00:15:08.468 "passthru": { 00:15:08.468 "name": "pt2", 00:15:08.468 "base_bdev_name": "malloc2" 00:15:08.468 } 00:15:08.468 } 00:15:08.468 }' 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:15:08.468 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:08.728 [2024-07-25 02:39:55.595234] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2c74466f-4a2f-11ef-9c8e-7947904e2597 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 2c74466f-4a2f-11ef-9c8e-7947904e2597 ']' 00:15:08.728 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:08.987 [2024-07-25 02:39:55.791231] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.987 [2024-07-25 02:39:55.791246] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.987 [2024-07-25 02:39:55.791260] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.987 [2024-07-25 02:39:55.791271] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.987 [2024-07-25 02:39:55.791274] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x347524834f00 name raid_bdev1, state offline 00:15:08.987 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.987 02:39:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:09.246 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:09.246 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:09.246 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.246 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:09.505 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.505 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:09.765 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:10.025 [2024-07-25 02:39:56.787340] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:10.025 [2024-07-25 02:39:56.787772] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:10.025 [2024-07-25 02:39:56.787793] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:10.025 [2024-07-25 02:39:56.787817] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:10.025 [2024-07-25 02:39:56.787825] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.025 [2024-07-25 02:39:56.787829] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x347524834c80 name raid_bdev1, state configuring 00:15:10.025 request: 00:15:10.025 { 00:15:10.025 "name": "raid_bdev1", 00:15:10.025 "raid_level": "raid1", 00:15:10.025 "base_bdevs": [ 00:15:10.025 "malloc1", 00:15:10.025 "malloc2" 00:15:10.025 ], 00:15:10.025 "superblock": false, 00:15:10.025 "method": "bdev_raid_create", 00:15:10.025 "req_id": 1 00:15:10.025 } 00:15:10.025 Got JSON-RPC error response 00:15:10.025 response: 00:15:10.025 { 00:15:10.025 "code": -17, 00:15:10.025 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:10.025 } 00:15:10.025 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:15:10.025 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.025 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.025 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.025 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:10.025 02:39:56 
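This block is the negative path: raid_bdev1 and both passthru bdevs were deleted above, but malloc1 and malloc2 still carry the raid superblock written through pt1 and pt2, so asking for a brand-new raid directly on the malloc bdevs must fail. The NOT wrapper from autotest_common.sh inverts the exit status; a bare-bones sketch of the same check (not the actual helper):
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
if rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'unexpected success: existing superblocks on malloc1/malloc2 must block creation' >&2
        exit 1
fi
# expected JSON-RPC reply, as captured above: code -17,
# "Failed to create RAID bdev raid_bdev1: File exists"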
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.285 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:10.285 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:10.285 02:39:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.285 [2024-07-25 02:39:57.147373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.285 [2024-07-25 02:39:57.147408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.285 [2024-07-25 02:39:57.147415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x347524834780 00:15:10.285 [2024-07-25 02:39:57.147420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.285 [2024-07-25 02:39:57.147867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.285 [2024-07-25 02:39:57.147892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.285 [2024-07-25 02:39:57.147922] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:10.285 [2024-07-25 02:39:57.147931] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.285 pt1 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.285 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.544 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.545 "name": "raid_bdev1", 00:15:10.545 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:10.545 "strip_size_kb": 0, 00:15:10.545 "state": "configuring", 00:15:10.545 "raid_level": "raid1", 00:15:10.545 
"superblock": true, 00:15:10.545 "num_base_bdevs": 2, 00:15:10.545 "num_base_bdevs_discovered": 1, 00:15:10.545 "num_base_bdevs_operational": 2, 00:15:10.545 "base_bdevs_list": [ 00:15:10.545 { 00:15:10.545 "name": "pt1", 00:15:10.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.545 "is_configured": true, 00:15:10.545 "data_offset": 256, 00:15:10.545 "data_size": 7936 00:15:10.545 }, 00:15:10.545 { 00:15:10.545 "name": null, 00:15:10.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.545 "is_configured": false, 00:15:10.545 "data_offset": 256, 00:15:10.545 "data_size": 7936 00:15:10.545 } 00:15:10.545 ] 00:15:10.545 }' 00:15:10.545 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.545 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:10.804 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:10.804 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:10.804 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:10.804 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.064 [2024-07-25 02:39:57.827432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.064 [2024-07-25 02:39:57.827468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.064 [2024-07-25 02:39:57.827476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x347524834f00 00:15:11.064 [2024-07-25 02:39:57.827481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.064 [2024-07-25 02:39:57.827525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.064 [2024-07-25 02:39:57.827531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.064 [2024-07-25 02:39:57.827542] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.064 [2024-07-25 02:39:57.827548] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.064 [2024-07-25 02:39:57.827564] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x347524835180 00:15:11.064 [2024-07-25 02:39:57.827567] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:11.064 [2024-07-25 02:39:57.827581] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x347524897e20 00:15:11.064 [2024-07-25 02:39:57.827590] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x347524835180 00:15:11.064 [2024-07-25 02:39:57.827592] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x347524835180 00:15:11.064 [2024-07-25 02:39:57.827600] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.064 pt2 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.064 02:39:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.323 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.323 "name": "raid_bdev1", 00:15:11.323 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:11.323 "strip_size_kb": 0, 00:15:11.323 "state": "online", 00:15:11.323 "raid_level": "raid1", 00:15:11.323 "superblock": true, 00:15:11.323 "num_base_bdevs": 2, 00:15:11.323 "num_base_bdevs_discovered": 2, 00:15:11.323 "num_base_bdevs_operational": 2, 00:15:11.323 "base_bdevs_list": [ 00:15:11.323 { 00:15:11.323 "name": "pt1", 00:15:11.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.323 "is_configured": true, 00:15:11.323 "data_offset": 256, 00:15:11.323 "data_size": 7936 00:15:11.323 }, 00:15:11.323 { 00:15:11.323 "name": "pt2", 00:15:11.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.323 "is_configured": true, 00:15:11.323 "data_offset": 256, 00:15:11.323 "data_size": 7936 00:15:11.323 } 00:15:11.323 ] 00:15:11.323 }' 00:15:11.323 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.323 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:11.583 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:11.843 [2024-07-25 02:39:58.491518] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.843 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:11.843 "name": "raid_bdev1", 00:15:11.843 "aliases": [ 00:15:11.843 "2c74466f-4a2f-11ef-9c8e-7947904e2597" 00:15:11.843 ], 00:15:11.843 "product_name": "Raid Volume", 00:15:11.843 "block_size": 4128, 00:15:11.843 "num_blocks": 7936, 00:15:11.843 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:11.843 "md_size": 32, 00:15:11.843 "md_interleave": true, 00:15:11.843 "dif_type": 0, 00:15:11.843 "assigned_rate_limits": { 00:15:11.843 "rw_ios_per_sec": 0, 00:15:11.843 "rw_mbytes_per_sec": 0, 00:15:11.843 "r_mbytes_per_sec": 0, 00:15:11.843 "w_mbytes_per_sec": 0 00:15:11.843 }, 00:15:11.843 "claimed": false, 00:15:11.843 "zoned": false, 00:15:11.843 "supported_io_types": { 00:15:11.843 "read": true, 00:15:11.843 "write": true, 00:15:11.843 "unmap": false, 00:15:11.843 "flush": false, 00:15:11.843 "reset": true, 00:15:11.843 "nvme_admin": false, 00:15:11.843 "nvme_io": false, 00:15:11.843 "nvme_io_md": false, 00:15:11.843 "write_zeroes": true, 00:15:11.843 "zcopy": false, 00:15:11.843 "get_zone_info": false, 00:15:11.843 "zone_management": false, 00:15:11.843 "zone_append": false, 00:15:11.843 "compare": false, 00:15:11.843 "compare_and_write": false, 00:15:11.843 "abort": false, 00:15:11.843 "seek_hole": false, 00:15:11.843 "seek_data": false, 00:15:11.843 "copy": false, 00:15:11.843 "nvme_iov_md": false 00:15:11.843 }, 00:15:11.843 "memory_domains": [ 00:15:11.843 { 00:15:11.843 "dma_device_id": "system", 00:15:11.843 "dma_device_type": 1 00:15:11.843 }, 00:15:11.843 { 00:15:11.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.843 "dma_device_type": 2 00:15:11.843 }, 00:15:11.843 { 00:15:11.843 "dma_device_id": "system", 00:15:11.843 "dma_device_type": 1 00:15:11.843 }, 00:15:11.843 { 00:15:11.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.843 "dma_device_type": 2 00:15:11.843 } 00:15:11.843 ], 00:15:11.843 "driver_specific": { 00:15:11.843 "raid": { 00:15:11.843 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:11.843 "strip_size_kb": 0, 00:15:11.843 "state": "online", 00:15:11.843 "raid_level": "raid1", 00:15:11.843 "superblock": true, 00:15:11.843 "num_base_bdevs": 2, 00:15:11.843 "num_base_bdevs_discovered": 2, 00:15:11.843 "num_base_bdevs_operational": 2, 00:15:11.843 "base_bdevs_list": [ 00:15:11.843 { 00:15:11.843 "name": "pt1", 00:15:11.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.843 "is_configured": true, 00:15:11.843 "data_offset": 256, 00:15:11.843 "data_size": 7936 00:15:11.843 }, 00:15:11.843 { 00:15:11.843 "name": "pt2", 00:15:11.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.843 "is_configured": true, 00:15:11.843 "data_offset": 256, 00:15:11.843 "data_size": 7936 00:15:11.843 } 00:15:11.843 ] 00:15:11.843 } 00:15:11.843 } 00:15:11.843 }' 00:15:11.843 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.843 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:11.843 pt2' 00:15:11.843 02:39:58 
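After the passthru bdevs are re-created, the examine path finds the superblock on pt1 and pt2 and re-assembles raid_bdev1 automatically, so the volume is back online with both base bdevs. The verify helper then derives the configured base bdev names from the raid JSON; the same extraction stands on its own as (rpc() shorthand as in the notes above, an equivalent rather than the helper's exact pipeline):
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
rpc bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[0].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
# expected output here: pt1 and pt2, one per line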
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.843 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:11.843 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.843 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.843 "name": "pt1", 00:15:11.843 "aliases": [ 00:15:11.843 "00000000-0000-0000-0000-000000000001" 00:15:11.843 ], 00:15:11.843 "product_name": "passthru", 00:15:11.843 "block_size": 4128, 00:15:11.843 "num_blocks": 8192, 00:15:11.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.843 "md_size": 32, 00:15:11.843 "md_interleave": true, 00:15:11.843 "dif_type": 0, 00:15:11.843 "assigned_rate_limits": { 00:15:11.843 "rw_ios_per_sec": 0, 00:15:11.843 "rw_mbytes_per_sec": 0, 00:15:11.843 "r_mbytes_per_sec": 0, 00:15:11.843 "w_mbytes_per_sec": 0 00:15:11.843 }, 00:15:11.843 "claimed": true, 00:15:11.843 "claim_type": "exclusive_write", 00:15:11.843 "zoned": false, 00:15:11.843 "supported_io_types": { 00:15:11.843 "read": true, 00:15:11.843 "write": true, 00:15:11.844 "unmap": true, 00:15:11.844 "flush": true, 00:15:11.844 "reset": true, 00:15:11.844 "nvme_admin": false, 00:15:11.844 "nvme_io": false, 00:15:11.844 "nvme_io_md": false, 00:15:11.844 "write_zeroes": true, 00:15:11.844 "zcopy": true, 00:15:11.844 "get_zone_info": false, 00:15:11.844 "zone_management": false, 00:15:11.844 "zone_append": false, 00:15:11.844 "compare": false, 00:15:11.844 "compare_and_write": false, 00:15:11.844 "abort": true, 00:15:11.844 "seek_hole": false, 00:15:11.844 "seek_data": false, 00:15:11.844 "copy": true, 00:15:11.844 "nvme_iov_md": false 00:15:11.844 }, 00:15:11.844 "memory_domains": [ 00:15:11.844 { 00:15:11.844 "dma_device_id": "system", 00:15:11.844 "dma_device_type": 1 00:15:11.844 }, 00:15:11.844 { 00:15:11.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.844 "dma_device_type": 2 00:15:11.844 } 00:15:11.844 ], 00:15:11.844 "driver_specific": { 00:15:11.844 "passthru": { 00:15:11.844 "name": "pt1", 00:15:11.844 "base_bdev_name": "malloc1" 00:15:11.844 } 00:15:11.844 } 00:15:11.844 }' 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:15:11.844 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.103 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.103 "name": "pt2", 00:15:12.103 "aliases": [ 00:15:12.104 "00000000-0000-0000-0000-000000000002" 00:15:12.104 ], 00:15:12.104 "product_name": "passthru", 00:15:12.104 "block_size": 4128, 00:15:12.104 "num_blocks": 8192, 00:15:12.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.104 "md_size": 32, 00:15:12.104 "md_interleave": true, 00:15:12.104 "dif_type": 0, 00:15:12.104 "assigned_rate_limits": { 00:15:12.104 "rw_ios_per_sec": 0, 00:15:12.104 "rw_mbytes_per_sec": 0, 00:15:12.104 "r_mbytes_per_sec": 0, 00:15:12.104 "w_mbytes_per_sec": 0 00:15:12.104 }, 00:15:12.104 "claimed": true, 00:15:12.104 "claim_type": "exclusive_write", 00:15:12.104 "zoned": false, 00:15:12.104 "supported_io_types": { 00:15:12.104 "read": true, 00:15:12.104 "write": true, 00:15:12.104 "unmap": true, 00:15:12.104 "flush": true, 00:15:12.104 "reset": true, 00:15:12.104 "nvme_admin": false, 00:15:12.104 "nvme_io": false, 00:15:12.104 "nvme_io_md": false, 00:15:12.104 "write_zeroes": true, 00:15:12.104 "zcopy": true, 00:15:12.104 "get_zone_info": false, 00:15:12.104 "zone_management": false, 00:15:12.104 "zone_append": false, 00:15:12.104 "compare": false, 00:15:12.104 "compare_and_write": false, 00:15:12.104 "abort": true, 00:15:12.104 "seek_hole": false, 00:15:12.104 "seek_data": false, 00:15:12.104 "copy": true, 00:15:12.104 "nvme_iov_md": false 00:15:12.104 }, 00:15:12.104 "memory_domains": [ 00:15:12.104 { 00:15:12.104 "dma_device_id": "system", 00:15:12.104 "dma_device_type": 1 00:15:12.104 }, 00:15:12.104 { 00:15:12.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.104 "dma_device_type": 2 00:15:12.104 } 00:15:12.104 ], 00:15:12.104 "driver_specific": { 00:15:12.104 "passthru": { 00:15:12.104 "name": "pt2", 00:15:12.104 "base_bdev_name": "malloc2" 00:15:12.104 } 00:15:12.104 } 00:15:12.104 }' 00:15:12.104 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.104 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.104 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:15:12.104 02:39:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.363 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.363 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:15:12.363 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:15:12.364 
02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:12.364 [2024-07-25 02:39:59.235570] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 2c74466f-4a2f-11ef-9c8e-7947904e2597 '!=' 2c74466f-4a2f-11ef-9c8e-7947904e2597 ']' 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:15:12.364 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:12.623 [2024-07-25 02:39:59.419574] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.623 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.883 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.883 "name": "raid_bdev1", 00:15:12.883 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:12.883 "strip_size_kb": 0, 00:15:12.883 "state": "online", 00:15:12.883 "raid_level": "raid1", 00:15:12.883 "superblock": true, 00:15:12.883 "num_base_bdevs": 2, 00:15:12.883 
"num_base_bdevs_discovered": 1, 00:15:12.883 "num_base_bdevs_operational": 1, 00:15:12.883 "base_bdevs_list": [ 00:15:12.883 { 00:15:12.883 "name": null, 00:15:12.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.883 "is_configured": false, 00:15:12.883 "data_offset": 256, 00:15:12.883 "data_size": 7936 00:15:12.883 }, 00:15:12.883 { 00:15:12.883 "name": "pt2", 00:15:12.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.883 "is_configured": true, 00:15:12.883 "data_offset": 256, 00:15:12.883 "data_size": 7936 00:15:12.883 } 00:15:12.883 ] 00:15:12.883 }' 00:15:12.883 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.883 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:13.143 02:39:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:13.402 [2024-07-25 02:40:00.107626] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.402 [2024-07-25 02:40:00.107644] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.402 [2024-07-25 02:40:00.107657] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.402 [2024-07-25 02:40:00.107666] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.402 [2024-07-25 02:40:00.107669] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x347524835180 name raid_bdev1, state offline 00:15:13.402 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.402 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:15:13.402 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:15:13.403 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:15:13.403 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:15:13.403 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:15:13.403 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:13.661 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:15:13.661 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:15:13.661 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:15:13.661 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:15:13.661 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:15:13.661 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.920 [2024-07-25 02:40:00.651675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:13.920 [2024-07-25 02:40:00.651708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.920 [2024-07-25 02:40:00.651716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x347524834f00 00:15:13.920 [2024-07-25 02:40:00.651721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.920 [2024-07-25 02:40:00.652142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.920 [2024-07-25 02:40:00.652167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.920 [2024-07-25 02:40:00.652195] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.920 [2024-07-25 02:40:00.652204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.920 [2024-07-25 02:40:00.652218] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x347524835180 00:15:13.920 [2024-07-25 02:40:00.652221] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:13.920 [2024-07-25 02:40:00.652236] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x347524897e20 00:15:13.920 [2024-07-25 02:40:00.652245] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x347524835180 00:15:13.920 [2024-07-25 02:40:00.652248] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x347524835180 00:15:13.920 [2024-07-25 02:40:00.652256] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.920 pt2 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.920 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.179 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.179 "name": "raid_bdev1", 00:15:14.179 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:14.179 "strip_size_kb": 0, 00:15:14.179 "state": "online", 00:15:14.179 "raid_level": "raid1", 
00:15:14.179 "superblock": true, 00:15:14.179 "num_base_bdevs": 2, 00:15:14.179 "num_base_bdevs_discovered": 1, 00:15:14.179 "num_base_bdevs_operational": 1, 00:15:14.179 "base_bdevs_list": [ 00:15:14.179 { 00:15:14.179 "name": null, 00:15:14.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.179 "is_configured": false, 00:15:14.179 "data_offset": 256, 00:15:14.179 "data_size": 7936 00:15:14.179 }, 00:15:14.179 { 00:15:14.179 "name": "pt2", 00:15:14.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.179 "is_configured": true, 00:15:14.179 "data_offset": 256, 00:15:14.179 "data_size": 7936 00:15:14.179 } 00:15:14.179 ] 00:15:14.179 }' 00:15:14.179 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.179 02:40:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:14.438 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:14.438 [2024-07-25 02:40:01.303735] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.438 [2024-07-25 02:40:01.303752] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.438 [2024-07-25 02:40:01.303767] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.438 [2024-07-25 02:40:01.303776] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.438 [2024-07-25 02:40:01.303779] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x347524835180 name raid_bdev1, state offline 00:15:14.438 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:15:14.438 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.697 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:15:14.697 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:15:14.697 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:15:14.697 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.956 [2024-07-25 02:40:01.671770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.956 [2024-07-25 02:40:01.671800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.956 [2024-07-25 02:40:01.671808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x347524834c80 00:15:14.956 [2024-07-25 02:40:01.671813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.956 [2024-07-25 02:40:01.672211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.956 [2024-07-25 02:40:01.672233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.956 [2024-07-25 02:40:01.672245] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:14.956 [2024-07-25 02:40:01.672254] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.956 [2024-07-25 02:40:01.672278] bdev_raid.c:3641:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:14.956 [2024-07-25 02:40:01.672282] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.956 [2024-07-25 02:40:01.672288] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x347524834780 name raid_bdev1, state configuring 00:15:14.956 [2024-07-25 02:40:01.672297] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.956 [2024-07-25 02:40:01.672311] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x347524834780 00:15:14.956 [2024-07-25 02:40:01.672314] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:14.956 [2024-07-25 02:40:01.672333] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x347524897e20 00:15:14.956 [2024-07-25 02:40:01.672345] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x347524834780 00:15:14.956 [2024-07-25 02:40:01.672349] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x347524834780 00:15:14.956 [2024-07-25 02:40:01.672359] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.956 pt1 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.956 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.215 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.215 "name": "raid_bdev1", 00:15:15.215 "uuid": "2c74466f-4a2f-11ef-9c8e-7947904e2597", 00:15:15.215 "strip_size_kb": 0, 00:15:15.215 "state": "online", 00:15:15.215 "raid_level": "raid1", 00:15:15.215 "superblock": true, 00:15:15.215 "num_base_bdevs": 2, 00:15:15.215 
"num_base_bdevs_discovered": 1, 00:15:15.215 "num_base_bdevs_operational": 1, 00:15:15.215 "base_bdevs_list": [ 00:15:15.215 { 00:15:15.215 "name": null, 00:15:15.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.215 "is_configured": false, 00:15:15.215 "data_offset": 256, 00:15:15.215 "data_size": 7936 00:15:15.215 }, 00:15:15.215 { 00:15:15.215 "name": "pt2", 00:15:15.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.215 "is_configured": true, 00:15:15.215 "data_offset": 256, 00:15:15.215 "data_size": 7936 00:15:15.215 } 00:15:15.215 ] 00:15:15.215 }' 00:15:15.215 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.215 02:40:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:15.475 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:15.475 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:15.475 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:15:15.475 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:15.475 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:15:15.733 [2024-07-25 02:40:02.519880] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2c74466f-4a2f-11ef-9c8e-7947904e2597 '!=' 2c74466f-4a2f-11ef-9c8e-7947904e2597 ']' 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 66560 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66560 ']' 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66560 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66560 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:15:15.733 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:15.734 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:15.734 killing process with pid 66560 00:15:15.734 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66560' 00:15:15.734 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 66560 00:15:15.734 [2024-07-25 02:40:02.565175] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.734 [2024-07-25 02:40:02.565191] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.734 [2024-07-25 
02:40:02.565208] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.734 [2024-07-25 02:40:02.565212] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x347524834780 name raid_bdev1, state offline 00:15:15.734 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 66560 00:15:15.734 [2024-07-25 02:40:02.574741] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.993 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:15:15.993 00:15:15.993 real 0m10.426s 00:15:15.993 user 0m18.144s 00:15:15.993 sys 0m2.054s 00:15:15.993 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.993 02:40:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:15.993 ************************************ 00:15:15.993 END TEST raid_superblock_test_md_interleaved 00:15:15.993 ************************************ 00:15:15.993 02:40:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:15.993 02:40:02 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:15.993 02:40:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:15.993 02:40:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.993 02:40:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.993 ************************************ 00:15:15.993 START TEST raid_rebuild_test_sb_md_interleaved 00:15:15.993 ************************************ 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:15:15.993 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 
00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=66939 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 66939 /var/tmp/spdk-raid.sock 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66939 ']' 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.994 02:40:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:15.994 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:15.994 Zero copy mechanism will not be used. 00:15:15.994 [2024-07-25 02:40:02.830100] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:15.994 [2024-07-25 02:40:02.830456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:16.563 EAL: TSC is not safe to use in SMP mode 00:15:16.563 EAL: TSC is not invariant 00:15:16.563 [2024-07-25 02:40:03.266497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.563 [2024-07-25 02:40:03.344942] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:15:16.563 [2024-07-25 02:40:03.346526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.563 [2024-07-25 02:40:03.347044] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.563 [2024-07-25 02:40:03.347055] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.131 02:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.131 02:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:15:17.131 02:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:15:17.131 02:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:17.131 BaseBdev1_malloc 00:15:17.131 02:40:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:17.389 [2024-07-25 02:40:04.105884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:17.389 [2024-07-25 02:40:04.105917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.389 [2024-07-25 02:40:04.106363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d587034780 00:15:17.389 [2024-07-25 02:40:04.106387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.389 [2024-07-25 02:40:04.106720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.389 [2024-07-25 02:40:04.106747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.389 BaseBdev1 00:15:17.389 02:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:15:17.389 02:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:17.389 BaseBdev2_malloc 00:15:17.648 02:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:17.648 [2024-07-25 02:40:04.477912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:17.648 [2024-07-25 02:40:04.477943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.648 [2024-07-25 02:40:04.477960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d587034c80 00:15:17.648 [2024-07-25 02:40:04.477966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.648 [2024-07-25 02:40:04.478194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.648 [2024-07-25 02:40:04.478203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:17.648 BaseBdev2 00:15:17.648 02:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:17.907 spare_malloc 
00:15:17.907 02:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:18.165 spare_delay 00:15:18.165 02:40:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:18.165 [2024-07-25 02:40:05.025956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.165 [2024-07-25 02:40:05.025983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.165 [2024-07-25 02:40:05.025995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d587035400 00:15:18.165 [2024-07-25 02:40:05.026000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.165 [2024-07-25 02:40:05.026226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.165 [2024-07-25 02:40:05.026235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.165 spare 00:15:18.165 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:15:18.424 [2024-07-25 02:40:05.217981] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.424 [2024-07-25 02:40:05.218204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.424 [2024-07-25 02:40:05.218244] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16d587035680 00:15:18.424 [2024-07-25 02:40:05.218248] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:18.424 [2024-07-25 02:40:05.218266] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d587097e20 00:15:18.424 [2024-07-25 02:40:05.218275] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16d587035680 00:15:18.424 [2024-07-25 02:40:05.218278] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16d587035680 00:15:18.424 [2024-07-25 02:40:05.218285] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.424 02:40:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.424 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.683 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.683 "name": "raid_bdev1", 00:15:18.683 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:18.683 "strip_size_kb": 0, 00:15:18.683 "state": "online", 00:15:18.683 "raid_level": "raid1", 00:15:18.683 "superblock": true, 00:15:18.683 "num_base_bdevs": 2, 00:15:18.683 "num_base_bdevs_discovered": 2, 00:15:18.683 "num_base_bdevs_operational": 2, 00:15:18.683 "base_bdevs_list": [ 00:15:18.683 { 00:15:18.683 "name": "BaseBdev1", 00:15:18.683 "uuid": "8587ae95-1bae-de5e-9baf-7dae546b705c", 00:15:18.683 "is_configured": true, 00:15:18.683 "data_offset": 256, 00:15:18.683 "data_size": 7936 00:15:18.683 }, 00:15:18.683 { 00:15:18.683 "name": "BaseBdev2", 00:15:18.683 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:18.683 "is_configured": true, 00:15:18.683 "data_offset": 256, 00:15:18.683 "data_size": 7936 00:15:18.683 } 00:15:18.683 ] 00:15:18.683 }' 00:15:18.683 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.683 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:18.942 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:18.942 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:15:19.200 [2024-07-25 02:40:05.886061] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.200 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:15:19.200 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:19.201 02:40:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.201 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:15:19.201 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:15:19.201 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:15:19.201 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:15:19.459 [2024-07-25 02:40:06.226059] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.459 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.718 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.718 "name": "raid_bdev1", 00:15:19.718 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:19.718 "strip_size_kb": 0, 00:15:19.718 "state": "online", 00:15:19.718 "raid_level": "raid1", 00:15:19.718 "superblock": true, 00:15:19.718 "num_base_bdevs": 2, 00:15:19.718 "num_base_bdevs_discovered": 1, 00:15:19.718 "num_base_bdevs_operational": 1, 00:15:19.718 "base_bdevs_list": [ 00:15:19.718 { 00:15:19.718 "name": null, 00:15:19.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.718 "is_configured": false, 00:15:19.718 "data_offset": 256, 00:15:19.718 "data_size": 7936 00:15:19.718 }, 00:15:19.718 { 00:15:19.718 "name": "BaseBdev2", 00:15:19.718 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:19.718 "is_configured": true, 00:15:19.718 "data_offset": 256, 00:15:19.718 "data_size": 7936 00:15:19.718 } 00:15:19.718 ] 00:15:19.718 }' 00:15:19.718 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.718 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:19.976 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.235 [2024-07-25 02:40:06.882119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.235 [2024-07-25 02:40:06.882225] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d587097ec0 00:15:20.235 [2024-07-25 02:40:06.882852] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.235 02:40:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.172 02:40:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.432 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.432 "name": "raid_bdev1", 00:15:21.432 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:21.432 "strip_size_kb": 0, 00:15:21.432 "state": "online", 00:15:21.432 "raid_level": "raid1", 00:15:21.432 "superblock": true, 00:15:21.432 "num_base_bdevs": 2, 00:15:21.432 "num_base_bdevs_discovered": 2, 00:15:21.432 "num_base_bdevs_operational": 2, 00:15:21.432 "process": { 00:15:21.432 "type": "rebuild", 00:15:21.432 "target": "spare", 00:15:21.432 "progress": { 00:15:21.432 "blocks": 3072, 00:15:21.432 "percent": 38 00:15:21.432 } 00:15:21.432 }, 00:15:21.432 "base_bdevs_list": [ 00:15:21.432 { 00:15:21.432 "name": "spare", 00:15:21.432 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:21.432 "is_configured": true, 00:15:21.432 "data_offset": 256, 00:15:21.432 "data_size": 7936 00:15:21.432 }, 00:15:21.432 { 00:15:21.432 "name": "BaseBdev2", 00:15:21.432 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:21.432 "is_configured": true, 00:15:21.432 "data_offset": 256, 00:15:21.432 "data_size": 7936 00:15:21.432 } 00:15:21.432 ] 00:15:21.432 }' 00:15:21.432 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:21.432 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.432 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:21.432 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.432 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:15:21.692 [2024-07-25 02:40:08.340394] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.692 [2024-07-25 02:40:08.388393] bdev_raid.c:2544:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:15:21.692 [2024-07-25 02:40:08.388428] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.692 [2024-07-25 02:40:08.388432] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.692 [2024-07-25 02:40:08.388434] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.692 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.951 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.951 "name": "raid_bdev1", 00:15:21.951 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:21.951 "strip_size_kb": 0, 00:15:21.951 "state": "online", 00:15:21.951 "raid_level": "raid1", 00:15:21.951 "superblock": true, 00:15:21.951 "num_base_bdevs": 2, 00:15:21.951 "num_base_bdevs_discovered": 1, 00:15:21.951 "num_base_bdevs_operational": 1, 00:15:21.951 "base_bdevs_list": [ 00:15:21.951 { 00:15:21.951 "name": null, 00:15:21.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.951 "is_configured": false, 00:15:21.951 "data_offset": 256, 00:15:21.951 "data_size": 7936 00:15:21.951 }, 00:15:21.951 { 00:15:21.951 "name": "BaseBdev2", 00:15:21.952 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:21.952 "is_configured": true, 00:15:21.952 "data_offset": 256, 00:15:21.952 "data_size": 7936 00:15:21.952 } 00:15:21.952 ] 00:15:21.952 }' 00:15:21.952 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.952 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.211 02:40:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.211 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.211 "name": "raid_bdev1", 00:15:22.211 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:22.211 "strip_size_kb": 0, 00:15:22.211 "state": "online", 00:15:22.211 "raid_level": "raid1", 00:15:22.211 "superblock": true, 00:15:22.211 "num_base_bdevs": 2, 00:15:22.211 "num_base_bdevs_discovered": 1, 00:15:22.211 "num_base_bdevs_operational": 1, 00:15:22.211 "base_bdevs_list": [ 00:15:22.211 { 00:15:22.211 "name": null, 00:15:22.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.211 "is_configured": false, 00:15:22.211 "data_offset": 256, 00:15:22.211 "data_size": 7936 00:15:22.211 }, 00:15:22.211 { 00:15:22.211 "name": "BaseBdev2", 00:15:22.211 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:22.211 "is_configured": true, 00:15:22.211 "data_offset": 256, 00:15:22.211 "data_size": 7936 00:15:22.211 } 00:15:22.211 ] 00:15:22.211 }' 00:15:22.211 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:22.211 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:15:22.211 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:22.211 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:22.211 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.471 [2024-07-25 02:40:09.256525] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.471 [2024-07-25 02:40:09.256642] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d587097e20 00:15:22.471 [2024-07-25 02:40:09.257266] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.471 02:40:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.854 "name": "raid_bdev1", 00:15:23.854 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:23.854 "strip_size_kb": 0, 00:15:23.854 "state": "online", 00:15:23.854 "raid_level": "raid1", 00:15:23.854 "superblock": true, 00:15:23.854 "num_base_bdevs": 2, 00:15:23.854 "num_base_bdevs_discovered": 2, 00:15:23.854 
"num_base_bdevs_operational": 2, 00:15:23.854 "process": { 00:15:23.854 "type": "rebuild", 00:15:23.854 "target": "spare", 00:15:23.854 "progress": { 00:15:23.854 "blocks": 3072, 00:15:23.854 "percent": 38 00:15:23.854 } 00:15:23.854 }, 00:15:23.854 "base_bdevs_list": [ 00:15:23.854 { 00:15:23.854 "name": "spare", 00:15:23.854 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:23.854 "is_configured": true, 00:15:23.854 "data_offset": 256, 00:15:23.854 "data_size": 7936 00:15:23.854 }, 00:15:23.854 { 00:15:23.854 "name": "BaseBdev2", 00:15:23.854 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:23.854 "is_configured": true, 00:15:23.854 "data_offset": 256, 00:15:23.854 "data_size": 7936 00:15:23.854 } 00:15:23.854 ] 00:15:23.854 }' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:15:23.854 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=555 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:23.854 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.855 "name": "raid_bdev1", 00:15:23.855 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:23.855 "strip_size_kb": 0, 00:15:23.855 "state": "online", 00:15:23.855 "raid_level": "raid1", 00:15:23.855 "superblock": true, 00:15:23.855 
"num_base_bdevs": 2, 00:15:23.855 "num_base_bdevs_discovered": 2, 00:15:23.855 "num_base_bdevs_operational": 2, 00:15:23.855 "process": { 00:15:23.855 "type": "rebuild", 00:15:23.855 "target": "spare", 00:15:23.855 "progress": { 00:15:23.855 "blocks": 3584, 00:15:23.855 "percent": 45 00:15:23.855 } 00:15:23.855 }, 00:15:23.855 "base_bdevs_list": [ 00:15:23.855 { 00:15:23.855 "name": "spare", 00:15:23.855 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:23.855 "is_configured": true, 00:15:23.855 "data_offset": 256, 00:15:23.855 "data_size": 7936 00:15:23.855 }, 00:15:23.855 { 00:15:23.855 "name": "BaseBdev2", 00:15:23.855 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:23.855 "is_configured": true, 00:15:23.855 "data_offset": 256, 00:15:23.855 "data_size": 7936 00:15:23.855 } 00:15:23.855 ] 00:15:23.855 }' 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.855 02:40:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.237 02:40:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.237 02:40:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.237 "name": "raid_bdev1", 00:15:25.237 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:25.237 "strip_size_kb": 0, 00:15:25.237 "state": "online", 00:15:25.237 "raid_level": "raid1", 00:15:25.237 "superblock": true, 00:15:25.237 "num_base_bdevs": 2, 00:15:25.237 "num_base_bdevs_discovered": 2, 00:15:25.237 "num_base_bdevs_operational": 2, 00:15:25.237 "process": { 00:15:25.237 "type": "rebuild", 00:15:25.237 "target": "spare", 00:15:25.237 "progress": { 00:15:25.237 "blocks": 6912, 00:15:25.237 "percent": 87 00:15:25.237 } 00:15:25.237 }, 00:15:25.237 "base_bdevs_list": [ 00:15:25.237 { 00:15:25.237 "name": "spare", 00:15:25.237 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:25.237 "is_configured": true, 00:15:25.237 "data_offset": 256, 00:15:25.237 "data_size": 7936 00:15:25.237 }, 00:15:25.237 { 00:15:25.237 "name": "BaseBdev2", 00:15:25.237 "uuid": 
"71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:25.237 "is_configured": true, 00:15:25.237 "data_offset": 256, 00:15:25.237 "data_size": 7936 00:15:25.237 } 00:15:25.237 ] 00:15:25.237 }' 00:15:25.237 02:40:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:25.237 02:40:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.237 02:40:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:25.237 02:40:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.237 02:40:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:15:25.497 [2024-07-25 02:40:12.368660] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:25.497 [2024-07-25 02:40:12.368684] bdev_raid.c:2534:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:25.497 [2024-07-25 02:40:12.368722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.437 "name": "raid_bdev1", 00:15:26.437 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:26.437 "strip_size_kb": 0, 00:15:26.437 "state": "online", 00:15:26.437 "raid_level": "raid1", 00:15:26.437 "superblock": true, 00:15:26.437 "num_base_bdevs": 2, 00:15:26.437 "num_base_bdevs_discovered": 2, 00:15:26.437 "num_base_bdevs_operational": 2, 00:15:26.437 "base_bdevs_list": [ 00:15:26.437 { 00:15:26.437 "name": "spare", 00:15:26.437 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:26.437 "is_configured": true, 00:15:26.437 "data_offset": 256, 00:15:26.437 "data_size": 7936 00:15:26.437 }, 00:15:26.437 { 00:15:26.437 "name": "BaseBdev2", 00:15:26.437 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:26.437 "is_configured": true, 00:15:26.437 "data_offset": 256, 00:15:26.437 "data_size": 7936 00:15:26.437 } 00:15:26.437 ] 00:15:26.437 }' 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:26.437 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:26.437 02:40:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.696 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.696 "name": "raid_bdev1", 00:15:26.696 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:26.696 "strip_size_kb": 0, 00:15:26.696 "state": "online", 00:15:26.696 "raid_level": "raid1", 00:15:26.696 "superblock": true, 00:15:26.696 "num_base_bdevs": 2, 00:15:26.696 "num_base_bdevs_discovered": 2, 00:15:26.696 "num_base_bdevs_operational": 2, 00:15:26.696 "base_bdevs_list": [ 00:15:26.696 { 00:15:26.696 "name": "spare", 00:15:26.696 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:26.696 "is_configured": true, 00:15:26.696 "data_offset": 256, 00:15:26.697 "data_size": 7936 00:15:26.697 }, 00:15:26.697 { 00:15:26.697 "name": "BaseBdev2", 00:15:26.697 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:26.697 "is_configured": true, 00:15:26.697 "data_offset": 256, 00:15:26.697 "data_size": 7936 00:15:26.697 } 00:15:26.697 ] 00:15:26.697 }' 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.697 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.956 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.956 "name": "raid_bdev1", 00:15:26.956 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:26.956 "strip_size_kb": 0, 00:15:26.956 "state": "online", 00:15:26.956 "raid_level": "raid1", 00:15:26.956 "superblock": true, 00:15:26.956 "num_base_bdevs": 2, 00:15:26.956 "num_base_bdevs_discovered": 2, 00:15:26.956 "num_base_bdevs_operational": 2, 00:15:26.956 "base_bdevs_list": [ 00:15:26.956 { 00:15:26.956 "name": "spare", 00:15:26.956 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:26.956 "is_configured": true, 00:15:26.956 "data_offset": 256, 00:15:26.956 "data_size": 7936 00:15:26.956 }, 00:15:26.956 { 00:15:26.956 "name": "BaseBdev2", 00:15:26.956 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:26.956 "is_configured": true, 00:15:26.956 "data_offset": 256, 00:15:26.956 "data_size": 7936 00:15:26.956 } 00:15:26.956 ] 00:15:26.956 }' 00:15:26.956 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.956 02:40:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:27.216 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:27.475 [2024-07-25 02:40:14.224867] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.475 [2024-07-25 02:40:14.224884] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.475 [2024-07-25 02:40:14.224905] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.475 [2024-07-25 02:40:14.224918] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.475 [2024-07-25 02:40:14.224921] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16d587035680 name raid_bdev1, state offline 00:15:27.475 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.475 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:15:27.735 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:15:27.735 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:15:27.735 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:15:27.735 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:15:27.735 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:27.998 [2024-07-25 02:40:14.792919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:27.998 [2024-07-25 02:40:14.792955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.998 [2024-07-25 02:40:14.792994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d587035400 00:15:27.998 [2024-07-25 02:40:14.793000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.998 [2024-07-25 02:40:14.793478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.998 [2024-07-25 02:40:14.793503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:27.998 [2024-07-25 02:40:14.793519] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:27.998 [2024-07-25 02:40:14.793529] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:27.998 [2024-07-25 02:40:14.793549] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.998 spare 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.998 02:40:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.998 [2024-07-25 02:40:14.893543] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x16d587035680 00:15:27.998 [2024-07-25 02:40:14.893553] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:27.998 [2024-07-25 02:40:14.893589] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d587097e20 00:15:27.998 [2024-07-25 02:40:14.893600] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16d587035680 00:15:27.998 [2024-07-25 02:40:14.893603] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16d587035680 00:15:27.998 [2024-07-25 02:40:14.893619] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.290 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.290 "name": "raid_bdev1", 00:15:28.290 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:28.290 "strip_size_kb": 0, 00:15:28.290 "state": "online", 00:15:28.290 "raid_level": "raid1", 00:15:28.290 "superblock": true, 00:15:28.290 "num_base_bdevs": 2, 00:15:28.290 "num_base_bdevs_discovered": 2, 00:15:28.290 "num_base_bdevs_operational": 2, 00:15:28.290 "base_bdevs_list": [ 00:15:28.290 { 00:15:28.290 "name": "spare", 00:15:28.290 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:28.290 "is_configured": true, 00:15:28.290 "data_offset": 256, 00:15:28.290 "data_size": 7936 00:15:28.290 }, 00:15:28.290 { 00:15:28.290 "name": "BaseBdev2", 00:15:28.290 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:28.290 "is_configured": true, 00:15:28.290 "data_offset": 256, 00:15:28.290 "data_size": 7936 00:15:28.290 } 00:15:28.290 ] 00:15:28.290 }' 00:15:28.290 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.290 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.557 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.557 "name": "raid_bdev1", 00:15:28.557 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:28.557 "strip_size_kb": 0, 00:15:28.557 "state": "online", 00:15:28.557 "raid_level": "raid1", 00:15:28.557 "superblock": true, 00:15:28.557 "num_base_bdevs": 2, 00:15:28.557 "num_base_bdevs_discovered": 2, 00:15:28.557 "num_base_bdevs_operational": 2, 00:15:28.557 "base_bdevs_list": [ 00:15:28.557 { 00:15:28.557 "name": "spare", 00:15:28.557 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:28.557 "is_configured": true, 00:15:28.557 "data_offset": 256, 00:15:28.557 "data_size": 7936 00:15:28.557 }, 00:15:28.557 { 00:15:28.557 "name": "BaseBdev2", 00:15:28.557 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:28.557 "is_configured": true, 00:15:28.557 "data_offset": 256, 00:15:28.557 "data_size": 7936 00:15:28.557 } 00:15:28.557 ] 00:15:28.557 }' 00:15:28.557 02:40:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:28.833 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:15:28.833 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:28.833 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:28.834 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:28.834 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.834 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.834 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:15:29.093 [2024-07-25 02:40:15.825021] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.093 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.093 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.094 02:40:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.353 02:40:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.353 "name": "raid_bdev1", 00:15:29.353 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:29.353 "strip_size_kb": 0, 00:15:29.353 "state": "online", 00:15:29.353 "raid_level": "raid1", 00:15:29.353 "superblock": true, 00:15:29.353 "num_base_bdevs": 2, 00:15:29.353 "num_base_bdevs_discovered": 1, 00:15:29.353 "num_base_bdevs_operational": 1, 00:15:29.353 "base_bdevs_list": [ 00:15:29.353 { 00:15:29.353 "name": null, 00:15:29.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.353 "is_configured": false, 00:15:29.353 "data_offset": 256, 00:15:29.353 "data_size": 7936 00:15:29.353 }, 
00:15:29.353 { 00:15:29.353 "name": "BaseBdev2", 00:15:29.353 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:29.353 "is_configured": true, 00:15:29.353 "data_offset": 256, 00:15:29.353 "data_size": 7936 00:15:29.353 } 00:15:29.353 ] 00:15:29.353 }' 00:15:29.353 02:40:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.353 02:40:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:29.613 02:40:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.613 [2024-07-25 02:40:16.489085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.613 [2024-07-25 02:40:16.489135] bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.613 [2024-07-25 02:40:16.489139] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:29.613 [2024-07-25 02:40:16.489162] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.613 [2024-07-25 02:40:16.489249] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d587097ec0 00:15:29.613 [2024-07-25 02:40:16.489683] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.613 02:40:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:15:30.992 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.992 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:30.992 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:30.992 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:30.992 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:30.993 "name": "raid_bdev1", 00:15:30.993 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:30.993 "strip_size_kb": 0, 00:15:30.993 "state": "online", 00:15:30.993 "raid_level": "raid1", 00:15:30.993 "superblock": true, 00:15:30.993 "num_base_bdevs": 2, 00:15:30.993 "num_base_bdevs_discovered": 2, 00:15:30.993 "num_base_bdevs_operational": 2, 00:15:30.993 "process": { 00:15:30.993 "type": "rebuild", 00:15:30.993 "target": "spare", 00:15:30.993 "progress": { 00:15:30.993 "blocks": 3072, 00:15:30.993 "percent": 38 00:15:30.993 } 00:15:30.993 }, 00:15:30.993 "base_bdevs_list": [ 00:15:30.993 { 00:15:30.993 "name": "spare", 00:15:30.993 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:30.993 "is_configured": true, 00:15:30.993 "data_offset": 256, 00:15:30.993 "data_size": 7936 00:15:30.993 }, 00:15:30.993 { 
00:15:30.993 "name": "BaseBdev2", 00:15:30.993 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:30.993 "is_configured": true, 00:15:30.993 "data_offset": 256, 00:15:30.993 "data_size": 7936 00:15:30.993 } 00:15:30.993 ] 00:15:30.993 }' 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.993 02:40:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:15:31.252 [2024-07-25 02:40:17.935318] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.252 [2024-07-25 02:40:17.995269] bdev_raid.c:2544:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:15:31.252 [2024-07-25 02:40:17.995296] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.252 [2024-07-25 02:40:17.995300] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.252 [2024-07-25 02:40:17.995303] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.252 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.512 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.512 "name": "raid_bdev1", 00:15:31.512 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:31.512 "strip_size_kb": 0, 00:15:31.512 "state": "online", 00:15:31.512 "raid_level": "raid1", 00:15:31.512 "superblock": true, 
00:15:31.512 "num_base_bdevs": 2, 00:15:31.512 "num_base_bdevs_discovered": 1, 00:15:31.512 "num_base_bdevs_operational": 1, 00:15:31.512 "base_bdevs_list": [ 00:15:31.512 { 00:15:31.512 "name": null, 00:15:31.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.512 "is_configured": false, 00:15:31.512 "data_offset": 256, 00:15:31.512 "data_size": 7936 00:15:31.512 }, 00:15:31.512 { 00:15:31.512 "name": "BaseBdev2", 00:15:31.512 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:31.512 "is_configured": true, 00:15:31.512 "data_offset": 256, 00:15:31.512 "data_size": 7936 00:15:31.512 } 00:15:31.512 ] 00:15:31.512 }' 00:15:31.512 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.512 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:31.771 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:31.772 [2024-07-25 02:40:18.655373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.772 [2024-07-25 02:40:18.655408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.772 [2024-07-25 02:40:18.655446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d587035400 00:15:31.772 [2024-07-25 02:40:18.655452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.772 [2024-07-25 02:40:18.655501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.772 [2024-07-25 02:40:18.655507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.772 [2024-07-25 02:40:18.655521] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:31.772 [2024-07-25 02:40:18.655525] bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:31.772 [2024-07-25 02:40:18.655528] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:31.772 [2024-07-25 02:40:18.655535] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.772 [2024-07-25 02:40:18.655606] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16d587097e20 00:15:31.772 [2024-07-25 02:40:18.656010] bdev_raid.c:2906:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.772 spare 00:15:32.032 02:40:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.972 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.232 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.232 "name": "raid_bdev1", 00:15:33.232 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:33.232 "strip_size_kb": 0, 00:15:33.232 "state": "online", 00:15:33.232 "raid_level": "raid1", 00:15:33.232 "superblock": true, 00:15:33.232 "num_base_bdevs": 2, 00:15:33.232 "num_base_bdevs_discovered": 2, 00:15:33.232 "num_base_bdevs_operational": 2, 00:15:33.232 "process": { 00:15:33.232 "type": "rebuild", 00:15:33.232 "target": "spare", 00:15:33.232 "progress": { 00:15:33.232 "blocks": 3072, 00:15:33.232 "percent": 38 00:15:33.232 } 00:15:33.232 }, 00:15:33.232 "base_bdevs_list": [ 00:15:33.232 { 00:15:33.232 "name": "spare", 00:15:33.232 "uuid": "1c2f9d2e-259b-e85a-8039-70bd96fb91c8", 00:15:33.232 "is_configured": true, 00:15:33.232 "data_offset": 256, 00:15:33.232 "data_size": 7936 00:15:33.232 }, 00:15:33.232 { 00:15:33.232 "name": "BaseBdev2", 00:15:33.232 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:33.232 "is_configured": true, 00:15:33.232 "data_offset": 256, 00:15:33.232 "data_size": 7936 00:15:33.232 } 00:15:33.232 ] 00:15:33.232 }' 00:15:33.232 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:33.232 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.232 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:33.232 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.232 02:40:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:15:33.491 [2024-07-25 02:40:20.162010] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.491 [2024-07-25 02:40:20.261997] 
bdev_raid.c:2544:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:15:33.491 [2024-07-25 02:40:20.262033] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.491 [2024-07-25 02:40:20.262037] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.491 [2024-07-25 02:40:20.262040] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.491 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.749 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.749 "name": "raid_bdev1", 00:15:33.749 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:33.749 "strip_size_kb": 0, 00:15:33.749 "state": "online", 00:15:33.749 "raid_level": "raid1", 00:15:33.749 "superblock": true, 00:15:33.749 "num_base_bdevs": 2, 00:15:33.749 "num_base_bdevs_discovered": 1, 00:15:33.749 "num_base_bdevs_operational": 1, 00:15:33.749 "base_bdevs_list": [ 00:15:33.749 { 00:15:33.749 "name": null, 00:15:33.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.749 "is_configured": false, 00:15:33.749 "data_offset": 256, 00:15:33.749 "data_size": 7936 00:15:33.749 }, 00:15:33.749 { 00:15:33.749 "name": "BaseBdev2", 00:15:33.749 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:33.749 "is_configured": true, 00:15:33.749 "data_offset": 256, 00:15:33.749 "data_size": 7936 00:15:33.749 } 00:15:33.749 ] 00:15:33.749 }' 00:15:33.749 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.749 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.010 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.269 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.269 "name": "raid_bdev1", 00:15:34.269 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:34.269 "strip_size_kb": 0, 00:15:34.269 "state": "online", 00:15:34.269 "raid_level": "raid1", 00:15:34.269 "superblock": true, 00:15:34.269 "num_base_bdevs": 2, 00:15:34.269 "num_base_bdevs_discovered": 1, 00:15:34.269 "num_base_bdevs_operational": 1, 00:15:34.269 "base_bdevs_list": [ 00:15:34.269 { 00:15:34.269 "name": null, 00:15:34.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.269 "is_configured": false, 00:15:34.269 "data_offset": 256, 00:15:34.269 "data_size": 7936 00:15:34.269 }, 00:15:34.269 { 00:15:34.269 "name": "BaseBdev2", 00:15:34.269 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:34.269 "is_configured": true, 00:15:34.269 "data_offset": 256, 00:15:34.269 "data_size": 7936 00:15:34.269 } 00:15:34.269 ] 00:15:34.269 }' 00:15:34.269 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:34.269 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:15:34.269 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:34.269 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:34.270 02:40:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:15:34.270 02:40:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:34.529 [2024-07-25 02:40:21.298097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:34.529 [2024-07-25 02:40:21.298138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.529 [2024-07-25 02:40:21.298177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d587034780 00:15:34.529 [2024-07-25 02:40:21.298184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.529 [2024-07-25 02:40:21.298229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.529 [2024-07-25 02:40:21.298236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:34.529 [2024-07-25 02:40:21.298248] bdev_raid.c:3844:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:34.529 [2024-07-25 02:40:21.298252] 
bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.529 [2024-07-25 02:40:21.298255] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:34.529 BaseBdev1 00:15:34.529 02:40:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.473 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.733 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.733 "name": "raid_bdev1", 00:15:35.733 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:35.733 "strip_size_kb": 0, 00:15:35.733 "state": "online", 00:15:35.733 "raid_level": "raid1", 00:15:35.733 "superblock": true, 00:15:35.733 "num_base_bdevs": 2, 00:15:35.733 "num_base_bdevs_discovered": 1, 00:15:35.733 "num_base_bdevs_operational": 1, 00:15:35.733 "base_bdevs_list": [ 00:15:35.733 { 00:15:35.733 "name": null, 00:15:35.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.733 "is_configured": false, 00:15:35.733 "data_offset": 256, 00:15:35.733 "data_size": 7936 00:15:35.733 }, 00:15:35.733 { 00:15:35.733 "name": "BaseBdev2", 00:15:35.733 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:35.733 "is_configured": true, 00:15:35.733 "data_offset": 256, 00:15:35.733 "data_size": 7936 00:15:35.733 } 00:15:35.733 ] 00:15:35.733 }' 00:15:35.733 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.733 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.993 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.253 02:40:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.253 "name": "raid_bdev1", 00:15:36.253 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:36.253 "strip_size_kb": 0, 00:15:36.253 "state": "online", 00:15:36.253 "raid_level": "raid1", 00:15:36.253 "superblock": true, 00:15:36.253 "num_base_bdevs": 2, 00:15:36.253 "num_base_bdevs_discovered": 1, 00:15:36.253 "num_base_bdevs_operational": 1, 00:15:36.253 "base_bdevs_list": [ 00:15:36.253 { 00:15:36.253 "name": null, 00:15:36.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.253 "is_configured": false, 00:15:36.253 "data_offset": 256, 00:15:36.253 "data_size": 7936 00:15:36.253 }, 00:15:36.253 { 00:15:36.253 "name": "BaseBdev2", 00:15:36.253 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:36.253 "is_configured": true, 00:15:36.253 "data_offset": 256, 00:15:36.253 "data_size": 7936 00:15:36.253 } 00:15:36.253 ] 00:15:36.253 }' 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:36.253 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:36.512 [2024-07-25 02:40:23.198282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.512 [2024-07-25 02:40:23.198334] bdev_raid.c:3656:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:36.512 [2024-07-25 02:40:23.198338] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:36.512 request: 00:15:36.512 { 00:15:36.512 "base_bdev": "BaseBdev1", 00:15:36.512 "raid_bdev": "raid_bdev1", 00:15:36.512 "method": "bdev_raid_add_base_bdev", 00:15:36.512 "req_id": 1 00:15:36.512 } 00:15:36.512 Got JSON-RPC error response 00:15:36.512 response: 00:15:36.512 { 00:15:36.512 "code": -22, 00:15:36.512 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:36.512 } 00:15:36.512 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:15:36.512 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:36.512 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:36.512 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:36.512 02:40:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.451 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:37.710 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.710 "name": "raid_bdev1", 00:15:37.710 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:37.710 "strip_size_kb": 0, 00:15:37.710 "state": "online", 00:15:37.710 "raid_level": "raid1", 00:15:37.710 "superblock": true, 00:15:37.710 "num_base_bdevs": 2, 00:15:37.710 "num_base_bdevs_discovered": 1, 00:15:37.710 "num_base_bdevs_operational": 1, 00:15:37.710 "base_bdevs_list": [ 00:15:37.710 { 00:15:37.710 "name": null, 00:15:37.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.710 "is_configured": false, 00:15:37.710 "data_offset": 256, 00:15:37.710 "data_size": 7936 00:15:37.710 }, 00:15:37.710 { 00:15:37.710 "name": "BaseBdev2", 00:15:37.710 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:37.710 "is_configured": true, 00:15:37.710 "data_offset": 256, 00:15:37.710 "data_size": 7936 00:15:37.710 } 00:15:37.710 ] 00:15:37.710 }' 00:15:37.710 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.710 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.969 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.228 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.228 "name": "raid_bdev1", 00:15:38.228 "uuid": "330cf332-4a2f-11ef-9c8e-7947904e2597", 00:15:38.228 "strip_size_kb": 0, 00:15:38.228 "state": "online", 00:15:38.229 "raid_level": "raid1", 00:15:38.229 "superblock": true, 00:15:38.229 "num_base_bdevs": 2, 00:15:38.229 "num_base_bdevs_discovered": 1, 00:15:38.229 "num_base_bdevs_operational": 1, 00:15:38.229 "base_bdevs_list": [ 00:15:38.229 { 00:15:38.229 "name": null, 00:15:38.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.229 "is_configured": false, 00:15:38.229 "data_offset": 256, 00:15:38.229 "data_size": 7936 00:15:38.229 }, 00:15:38.229 { 00:15:38.229 "name": "BaseBdev2", 00:15:38.229 "uuid": "71b9e1b5-302b-5354-b3bc-c4b5fc1e68f0", 00:15:38.229 "is_configured": true, 00:15:38.229 "data_offset": 256, 00:15:38.229 "data_size": 7936 00:15:38.229 } 00:15:38.229 ] 00:15:38.229 }' 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 66939 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66939 ']' 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66939 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66939 00:15:38.229 02:40:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:15:38.229 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:38.229 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:38.229 killing process with pid 66939 00:15:38.229 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66939' 00:15:38.229 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66939 00:15:38.229 Received shutdown signal, test time was about 60.000000 seconds 00:15:38.229 00:15:38.229 Latency(us) 00:15:38.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.229 =================================================================================================================== 00:15:38.229 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:38.229 [2024-07-25 02:40:25.005045] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.229 [2024-07-25 02:40:25.005072] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.229 [2024-07-25 02:40:25.005082] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.229 [2024-07-25 02:40:25.005085] bdev_raid.c: 379:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16d587035680 name raid_bdev1, state offline 00:15:38.229 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66939 00:15:38.229 [2024-07-25 02:40:25.019370] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.489 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:15:38.489 00:15:38.489 real 0m22.378s 00:15:38.489 user 0m32.803s 00:15:38.489 sys 0m2.629s 00:15:38.489 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.489 02:40:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:38.489 ************************************ 00:15:38.489 END TEST raid_rebuild_test_sb_md_interleaved 00:15:38.489 ************************************ 00:15:38.489 02:40:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:38.489 02:40:25 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:15:38.489 02:40:25 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:15:38.489 02:40:25 bdev_raid -- bdev/bdev_raid.sh@58 -- 
# '[' -n 66939 ']' 00:15:38.489 02:40:25 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 66939 00:15:38.489 02:40:25 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:15:38.489 00:15:38.489 real 8m59.726s 00:15:38.489 user 15m10.745s 00:15:38.489 sys 1m40.334s 00:15:38.489 02:40:25 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.489 02:40:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.489 ************************************ 00:15:38.489 END TEST bdev_raid 00:15:38.489 ************************************ 00:15:38.489 02:40:25 -- common/autotest_common.sh@1142 -- # return 0 00:15:38.489 02:40:25 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:15:38.489 02:40:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:38.489 02:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.489 02:40:25 -- common/autotest_common.sh@10 -- # set +x 00:15:38.489 ************************************ 00:15:38.489 START TEST bdevperf_config 00:15:38.489 ************************************ 00:15:38.489 02:40:25 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:15:38.749 * Looking for test storage... 00:15:38.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:15:38.749 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:38.749 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:38.749 02:40:25 bdevperf_config -- 
bdevperf/test_config.sh@19 -- # create_job job1 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:38.749 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:38.749 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:15:38.749 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:38.749 02:40:25 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:42.040 02:40:28 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 02:40:25.554950] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:42.040 [2024-07-25 02:40:25.555295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:42.040 Using job config with 4 jobs 00:15:42.040 EAL: TSC is not safe to use in SMP mode 00:15:42.040 EAL: TSC is not invariant 00:15:42.040 [2024-07-25 02:40:25.981615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.040 [2024-07-25 02:40:26.072931] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:42.040 [2024-07-25 02:40:26.074620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.040 cpumask for '\''job0'\'' is too big 00:15:42.040 cpumask for '\''job1'\'' is too big 00:15:42.040 cpumask for '\''job2'\'' is too big 00:15:42.040 cpumask for '\''job3'\'' is too big 00:15:42.040 Running I/O for 2 seconds... 
00:15:42.040 00:15:42.040 Latency(us) 00:15:42.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.040 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.040 Malloc0 : 2.00 412879.29 403.20 0.00 0.00 619.84 168.69 1249.54 00:15:42.040 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.040 Malloc0 : 2.00 412893.53 403.22 0.00 0.00 619.71 153.51 1078.17 00:15:42.040 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.040 Malloc0 : 2.00 412873.73 403.20 0.00 0.00 619.65 157.98 903.24 00:15:42.040 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.040 Malloc0 : 2.00 412857.43 403.18 0.00 0.00 619.56 149.05 792.56 00:15:42.040 =================================================================================================================== 00:15:42.040 Total : 1651503.97 1612.80 0.00 0.00 619.69 149.05 1249.54' 00:15:42.040 02:40:28 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 02:40:25.554950] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:42.040 [2024-07-25 02:40:25.555295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:42.040 Using job config with 4 jobs 00:15:42.040 EAL: TSC is not safe to use in SMP mode 00:15:42.040 EAL: TSC is not invariant 00:15:42.040 [2024-07-25 02:40:25.981615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.040 [2024-07-25 02:40:26.072931] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:42.040 [2024-07-25 02:40:26.074620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.040 cpumask for '\''job0'\'' is too big 00:15:42.040 cpumask for '\''job1'\'' is too big 00:15:42.040 cpumask for '\''job2'\'' is too big 00:15:42.040 cpumask for '\''job3'\'' is too big 00:15:42.040 Running I/O for 2 seconds... 00:15:42.040 00:15:42.040 Latency(us) 00:15:42.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.040 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.040 Malloc0 : 2.00 412879.29 403.20 0.00 0.00 619.84 168.69 1249.54 00:15:42.040 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.040 Malloc0 : 2.00 412893.53 403.22 0.00 0.00 619.71 153.51 1078.17 00:15:42.041 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.041 Malloc0 : 2.00 412873.73 403.20 0.00 0.00 619.65 157.98 903.24 00:15:42.041 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.041 Malloc0 : 2.00 412857.43 403.18 0.00 0.00 619.56 149.05 792.56 00:15:42.041 =================================================================================================================== 00:15:42.041 Total : 1651503.97 1612.80 0.00 0.00 619.69 149.05 1249.54' 00:15:42.041 02:40:28 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 02:40:25.554950] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:15:42.041 [2024-07-25 02:40:25.555295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:42.041 Using job config with 4 jobs 00:15:42.041 EAL: TSC is not safe to use in SMP mode 00:15:42.041 EAL: TSC is not invariant 00:15:42.041 [2024-07-25 02:40:25.981615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.041 [2024-07-25 02:40:26.072931] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:42.041 [2024-07-25 02:40:26.074620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.041 cpumask for '\''job0'\'' is too big 00:15:42.041 cpumask for '\''job1'\'' is too big 00:15:42.041 cpumask for '\''job2'\'' is too big 00:15:42.041 cpumask for '\''job3'\'' is too big 00:15:42.041 Running I/O for 2 seconds... 00:15:42.041 00:15:42.041 Latency(us) 00:15:42.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.041 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.041 Malloc0 : 2.00 412879.29 403.20 0.00 0.00 619.84 168.69 1249.54 00:15:42.041 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.041 Malloc0 : 2.00 412893.53 403.22 0.00 0.00 619.71 153.51 1078.17 00:15:42.041 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.041 Malloc0 : 2.00 412873.73 403.20 0.00 0.00 619.65 157.98 903.24 00:15:42.041 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:42.041 Malloc0 : 2.00 412857.43 403.18 0.00 0.00 619.56 149.05 792.56 00:15:42.041 =================================================================================================================== 00:15:42.041 Total : 1651503.97 1612.80 0.00 0.00 619.69 149.05 1249.54' 00:15:42.041 02:40:28 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:42.041 02:40:28 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:42.041 02:40:28 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:15:42.041 02:40:28 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:42.041 [2024-07-25 02:40:28.310645] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:42.041 [2024-07-25 02:40:28.310997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:42.041 EAL: TSC is not safe to use in SMP mode 00:15:42.041 EAL: TSC is not invariant 00:15:42.041 [2024-07-25 02:40:28.734716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.041 [2024-07-25 02:40:28.813198] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:42.041 [2024-07-25 02:40:28.814885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.041 cpumask for 'job0' is too big 00:15:42.041 cpumask for 'job1' is too big 00:15:42.041 cpumask for 'job2' is too big 00:15:42.041 cpumask for 'job3' is too big 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:15:44.582 Running I/O for 2 seconds... 
00:15:44.582 00:15:44.582 Latency(us) 00:15:44.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.582 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:44.582 Malloc0 : 2.00 412364.62 402.70 0.00 0.00 620.62 172.26 1213.84 00:15:44.582 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:44.582 Malloc0 : 2.00 412371.79 402.71 0.00 0.00 620.51 162.44 1042.47 00:15:44.582 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:44.582 Malloc0 : 2.00 412350.78 402.69 0.00 0.00 620.41 152.62 878.25 00:15:44.582 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:44.582 Malloc0 : 2.00 412333.68 402.67 0.00 0.00 620.32 154.41 771.14 00:15:44.582 =================================================================================================================== 00:15:44.582 Total : 1649420.87 1610.76 0.00 0.00 620.47 152.62 1213.84' 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:44.582 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:44.582 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:44.582 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:44.582 02:40:31 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
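The create_job calls traced above assemble the INI-style job file that the following bdevperf run consumes through -j test.conf. A minimal sketch of such a helper, reconstructed only from the locals visible in this trace (the rw= and filename= key names are assumptions, not copied from the real common.sh):

    create_job() {
        # Sketch of the helper traced above (common.sh@8-20), assumptions noted.
        local job_section=$1 rw=$2 filename=$3
        {
            echo "[$job_section]"
            if [[ -n $rw ]]; then echo "rw=$rw"; fi                    # assumed key name
            if [[ -n $filename ]]; then echo "filename=$filename"; fi  # assumed key name
        } >> "$testconf"    # $testconf is the test.conf handed to bdevperf via -j
    }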
00:15:47.123 02:40:33 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 02:40:31.063179] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:47.123 [2024-07-25 02:40:31.063448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:47.123 Using job config with 3 jobs 00:15:47.123 EAL: TSC is not safe to use in SMP mode 00:15:47.123 EAL: TSC is not invariant 00:15:47.123 [2024-07-25 02:40:31.484034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.123 [2024-07-25 02:40:31.574874] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:47.123 [2024-07-25 02:40:31.576532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.123 cpumask for '\''job0'\'' is too big 00:15:47.123 cpumask for '\''job1'\'' is too big 00:15:47.123 cpumask for '\''job2'\'' is too big 00:15:47.123 Running I/O for 2 seconds... 00:15:47.123 00:15:47.123 Latency(us) 00:15:47.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514316.47 502.26 0.00 0.00 497.56 188.32 942.51 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514299.51 502.25 0.00 0.00 497.48 154.41 810.41 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514284.60 502.23 0.00 0.00 497.42 157.98 678.32 00:15:47.123 =================================================================================================================== 00:15:47.123 Total : 1542900.58 1506.74 0.00 0.00 497.49 154.41 942.51' 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 02:40:31.063179] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:47.123 [2024-07-25 02:40:31.063448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:47.123 Using job config with 3 jobs 00:15:47.123 EAL: TSC is not safe to use in SMP mode 00:15:47.123 EAL: TSC is not invariant 00:15:47.123 [2024-07-25 02:40:31.484034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.123 [2024-07-25 02:40:31.574874] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:47.123 [2024-07-25 02:40:31.576532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.123 cpumask for '\''job0'\'' is too big 00:15:47.123 cpumask for '\''job1'\'' is too big 00:15:47.123 cpumask for '\''job2'\'' is too big 00:15:47.123 Running I/O for 2 seconds... 
00:15:47.123 00:15:47.123 Latency(us) 00:15:47.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514316.47 502.26 0.00 0.00 497.56 188.32 942.51 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514299.51 502.25 0.00 0.00 497.48 154.41 810.41 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514284.60 502.23 0.00 0.00 497.42 157.98 678.32 00:15:47.123 =================================================================================================================== 00:15:47.123 Total : 1542900.58 1506.74 0.00 0.00 497.49 154.41 942.51' 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 02:40:31.063179] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:47.123 [2024-07-25 02:40:31.063448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:47.123 Using job config with 3 jobs 00:15:47.123 EAL: TSC is not safe to use in SMP mode 00:15:47.123 EAL: TSC is not invariant 00:15:47.123 [2024-07-25 02:40:31.484034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.123 [2024-07-25 02:40:31.574874] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:47.123 [2024-07-25 02:40:31.576532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.123 cpumask for '\''job0'\'' is too big 00:15:47.123 cpumask for '\''job1'\'' is too big 00:15:47.123 cpumask for '\''job2'\'' is too big 00:15:47.123 Running I/O for 2 seconds... 
00:15:47.123 00:15:47.123 Latency(us) 00:15:47.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514316.47 502.26 0.00 0.00 497.56 188.32 942.51 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514299.51 502.25 0.00 0.00 497.48 154.41 810.41 00:15:47.123 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:47.123 Malloc0 : 2.00 514284.60 502.23 0.00 0.00 497.42 157.98 678.32 00:15:47.123 =================================================================================================================== 00:15:47.123 Total : 1542900.58 1506.74 0.00 0.00 497.49 154.41 942.51' 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:15:47.123 02:40:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:15:47.124 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:47.124 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:47.124 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:15:47.124 
02:40:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:47.124 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:15:47.124 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:15:47.124 02:40:33 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:50.418 02:40:36 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 02:40:33.841948] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:50.418 [2024-07-25 02:40:33.842230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:50.418 Using job config with 4 jobs 00:15:50.418 EAL: TSC is not safe to use in SMP mode 00:15:50.418 EAL: TSC is not invariant 00:15:50.418 [2024-07-25 02:40:34.273542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.418 [2024-07-25 02:40:34.366707] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:50.418 [2024-07-25 02:40:34.368425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.418 cpumask for '\''job0'\'' is too big 00:15:50.418 cpumask for '\''job1'\'' is too big 00:15:50.418 cpumask for '\''job2'\'' is too big 00:15:50.418 cpumask for '\''job3'\'' is too big 00:15:50.418 Running I/O for 2 seconds... 
00:15:50.418 00:15:50.418 Latency(us) 00:15:50.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.418 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.418 Malloc0 : 2.00 189734.00 185.29 0.00 0.00 1349.04 403.42 2699.00 00:15:50.418 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.418 Malloc1 : 2.00 189726.70 185.28 0.00 0.00 1348.95 401.64 2670.44 00:15:50.418 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.418 Malloc0 : 2.00 189717.70 185.27 0.00 0.00 1348.55 390.93 2284.87 00:15:50.418 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.418 Malloc1 : 2.00 189709.08 185.26 0.00 0.00 1348.54 358.80 2299.15 00:15:50.418 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.418 Malloc0 : 2.00 189701.03 185.25 0.00 0.00 1348.18 401.64 1885.02 00:15:50.418 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.418 Malloc1 : 2.00 189768.68 185.32 0.00 0.00 1347.52 380.22 1856.46 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189759.31 185.31 0.00 0.00 1347.19 340.95 1692.23 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189750.59 185.30 0.00 0.00 1347.16 264.19 1692.23 00:15:50.419 =================================================================================================================== 00:15:50.419 Total : 1517867.08 1482.29 0.00 0.00 1348.14 264.19 2699.00' 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 02:40:33.841948] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:50.419 [2024-07-25 02:40:33.842230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:50.419 Using job config with 4 jobs 00:15:50.419 EAL: TSC is not safe to use in SMP mode 00:15:50.419 EAL: TSC is not invariant 00:15:50.419 [2024-07-25 02:40:34.273542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.419 [2024-07-25 02:40:34.366707] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:50.419 [2024-07-25 02:40:34.368425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.419 cpumask for '\''job0'\'' is too big 00:15:50.419 cpumask for '\''job1'\'' is too big 00:15:50.419 cpumask for '\''job2'\'' is too big 00:15:50.419 cpumask for '\''job3'\'' is too big 00:15:50.419 Running I/O for 2 seconds... 
00:15:50.419 00:15:50.419 Latency(us) 00:15:50.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189734.00 185.29 0.00 0.00 1349.04 403.42 2699.00 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189726.70 185.28 0.00 0.00 1348.95 401.64 2670.44 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189717.70 185.27 0.00 0.00 1348.55 390.93 2284.87 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189709.08 185.26 0.00 0.00 1348.54 358.80 2299.15 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189701.03 185.25 0.00 0.00 1348.18 401.64 1885.02 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189768.68 185.32 0.00 0.00 1347.52 380.22 1856.46 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189759.31 185.31 0.00 0.00 1347.19 340.95 1692.23 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189750.59 185.30 0.00 0.00 1347.16 264.19 1692.23 00:15:50.419 =================================================================================================================== 00:15:50.419 Total : 1517867.08 1482.29 0.00 0.00 1348.14 264.19 2699.00' 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 02:40:33.841948] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:50.419 [2024-07-25 02:40:33.842230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:50.419 Using job config with 4 jobs 00:15:50.419 EAL: TSC is not safe to use in SMP mode 00:15:50.419 EAL: TSC is not invariant 00:15:50.419 [2024-07-25 02:40:34.273542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.419 [2024-07-25 02:40:34.366707] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:50.419 [2024-07-25 02:40:34.368425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.419 cpumask for '\''job0'\'' is too big 00:15:50.419 cpumask for '\''job1'\'' is too big 00:15:50.419 cpumask for '\''job2'\'' is too big 00:15:50.419 cpumask for '\''job3'\'' is too big 00:15:50.419 Running I/O for 2 seconds... 
00:15:50.419 00:15:50.419 Latency(us) 00:15:50.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189734.00 185.29 0.00 0.00 1349.04 403.42 2699.00 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189726.70 185.28 0.00 0.00 1348.95 401.64 2670.44 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189717.70 185.27 0.00 0.00 1348.55 390.93 2284.87 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189709.08 185.26 0.00 0.00 1348.54 358.80 2299.15 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189701.03 185.25 0.00 0.00 1348.18 401.64 1885.02 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189768.68 185.32 0.00 0.00 1347.52 380.22 1856.46 00:15:50.419 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc0 : 2.00 189759.31 185.31 0.00 0.00 1347.19 340.95 1692.23 00:15:50.419 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:15:50.419 Malloc1 : 2.00 189750.59 185.30 0.00 0.00 1347.16 264.19 1692.23 00:15:50.419 =================================================================================================================== 00:15:50.419 Total : 1517867.08 1482.29 0.00 0.00 1348.14 264.19 2699.00' 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:15:50.419 02:40:36 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:50.419 00:15:50.419 real 0m11.265s 00:15:50.419 user 0m9.236s 00:15:50.419 sys 0m2.101s 00:15:50.419 02:40:36 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.419 02:40:36 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.419 ************************************ 00:15:50.419 END TEST bdevperf_config 00:15:50.419 ************************************ 00:15:50.419 02:40:36 -- common/autotest_common.sh@1142 -- # return 0 00:15:50.419 02:40:36 -- spdk/autotest.sh@192 -- # uname -s 00:15:50.419 02:40:36 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:15:50.419 02:40:36 -- spdk/autotest.sh@198 -- # uname -s 00:15:50.419 02:40:36 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:15:50.419 02:40:36 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:15:50.419 02:40:36 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:15:50.419 02:40:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.419 02:40:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.419 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:50.419 ************************************ 00:15:50.419 START TEST blockdev_nvme 00:15:50.419 ************************************ 00:15:50.419 02:40:36 
blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:15:50.419 * Looking for test storage... 00:15:50.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:50.419 02:40:36 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' FreeBSD = Linux ']' 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@678 -- # PRE_RESERVED_MEM=2048 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:15:50.419 02:40:36 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:15:50.420 02:40:36 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:50.420 02:40:36 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67663 00:15:50.420 02:40:36 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:50.420 02:40:36 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:50.420 02:40:36 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 67663 00:15:50.420 02:40:36 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 67663 ']' 00:15:50.420 02:40:36 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.420 02:40:36 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.420 02:40:36 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
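The spdk_tgt startup above blocks in waitforlisten 67663 until the target answers on /var/tmp/spdk.sock. One plausible shape for that wait loop, shown for illustration only (the readiness probe via scripts/rpc.py rpc_get_methods is an assumption, not the helper's real body):

    waitforlisten() {
        # Illustration only; the real helper's body is not shown in this log.
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
            kill -0 "$pid" 2> /dev/null || return 1    # target died before it ever listened
            sleep 0.1
        done
    }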
00:15:50.420 02:40:36 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.420 02:40:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.420 [2024-07-25 02:40:36.907748] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:50.420 [2024-07-25 02:40:36.908106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:50.679 EAL: TSC is not safe to use in SMP mode 00:15:50.679 EAL: TSC is not invariant 00:15:50.679 [2024-07-25 02:40:37.335548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.679 [2024-07-25 02:40:37.428258] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:50.679 [2024-07-25 02:40:37.429906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.938 02:40:37 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.939 02:40:37 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:15:50.939 02:40:37 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:50.939 02:40:37 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:15:50.939 02:40:37 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:15:50.939 02:40:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:15:50.939 02:40:37 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:50.939 02:40:37 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:15:50.939 02:40:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.939 02:40:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.199 [2024-07-25 02:40:37.854173] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.199 02:40:37 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:51.199 02:40:37 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.199 02:40:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "46887d0b-4a2f-11ef-9c8e-7947904e2597"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "46887d0b-4a2f-11ef-9c8e-7947904e2597",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:51.199 02:40:38 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 67663 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 67663 ']' 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 67663 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 67663 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:15:51.199 killing process with pid 67663 
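killprocess, traced above for pid 67663 (and earlier for 66939), resolves the process name before killing it; on this FreeBSD host that is done with ps -c -o command piped to tail -1 rather than by reading /proc. A sketch of that flow as reconstructed from the trace (the Linux branch and the behaviour of the sudo guard are assumptions, since neither is exercised in this log):

    killprocess() {
        # Reconstructed from the traced checks; not the verbatim helper.
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                                     # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(basename "$(readlink "/proc/$pid/exe")")    # assumed; not exercised here
        else
            process_name=$(ps -c -o command "$pid" | tail -1)          # FreeBSD path from the trace
        fi
        # The trace only shows the comparison against "sudo"; bailing out is a guess.
        [[ $process_name == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }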
00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67663' 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 67663 00:15:51.199 02:40:38 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 67663 00:15:51.458 02:40:38 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:51.458 02:40:38 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:15:51.458 02:40:38 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:15:51.458 02:40:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.458 02:40:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 ************************************ 00:15:51.458 START TEST bdev_hello_world 00:15:51.458 ************************************ 00:15:51.458 02:40:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:15:51.458 [2024-07-25 02:40:38.305340] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:51.458 [2024-07-25 02:40:38.305677] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:52.027 EAL: TSC is not safe to use in SMP mode 00:15:52.027 EAL: TSC is not invariant 00:15:52.027 [2024-07-25 02:40:38.724365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.027 [2024-07-25 02:40:38.818585] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:52.027 [2024-07-25 02:40:38.820292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.027 [2024-07-25 02:40:38.876749] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:52.286 [2024-07-25 02:40:38.944892] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:52.286 [2024-07-25 02:40:38.944917] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:15:52.286 [2024-07-25 02:40:38.944941] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:52.286 [2024-07-25 02:40:38.945433] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:52.286 [2024-07-25 02:40:38.945687] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:52.286 [2024-07-25 02:40:38.945707] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:52.286 [2024-07-25 02:40:38.945890] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:52.286 00:15:52.286 [2024-07-25 02:40:38.945910] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:52.286 00:15:52.286 real 0m0.819s 00:15:52.286 user 0m0.368s 00:15:52.286 sys 0m0.448s 00:15:52.286 02:40:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.286 02:40:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:52.286 ************************************ 00:15:52.286 END TEST bdev_hello_world 00:15:52.286 ************************************ 00:15:52.286 02:40:39 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:15:52.286 02:40:39 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:52.286 02:40:39 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:52.287 02:40:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.287 02:40:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.287 ************************************ 00:15:52.287 START TEST bdev_bounds 00:15:52.287 ************************************ 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=67734 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:52.287 Process bdevio pid: 67734 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 67734' 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 67734 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 67734 ']' 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.287 02:40:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:52.546 [2024-07-25 02:40:39.195006] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:15:52.547 [2024-07-25 02:40:39.195359] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:52.806 EAL: TSC is not safe to use in SMP mode 00:15:52.806 EAL: TSC is not invariant 00:15:52.806 [2024-07-25 02:40:39.631801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.065 [2024-07-25 02:40:39.723331] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:53.065 [2024-07-25 02:40:39.723357] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:15:53.065 [2024-07-25 02:40:39.723363] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:15:53.065 [2024-07-25 02:40:39.726154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.065 [2024-07-25 02:40:39.726054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.065 [2024-07-25 02:40:39.726154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.065 [2024-07-25 02:40:39.781524] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:53.324 I/O targets: 00:15:53.324 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:53.324 00:15:53.324 00:15:53.324 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.324 http://cunit.sourceforge.net/ 00:15:53.324 00:15:53.324 00:15:53.324 Suite: bdevio tests on: Nvme0n1 00:15:53.324 Test: blockdev write read block ...passed 00:15:53.324 Test: blockdev write zeroes read block ...passed 00:15:53.324 Test: blockdev write zeroes read no split ...passed 00:15:53.324 Test: blockdev write zeroes read split ...passed 00:15:53.324 Test: blockdev write zeroes read split partial ...passed 00:15:53.324 Test: blockdev reset ...[2024-07-25 02:40:40.194330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:15:53.324 [2024-07-25 02:40:40.195385] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:53.324 passed 00:15:53.324 Test: blockdev write read 8 blocks ...passed 00:15:53.324 Test: blockdev write read size > 128k ...passed 00:15:53.324 Test: blockdev write read invalid size ...passed 00:15:53.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.324 Test: blockdev write read max offset ...passed 00:15:53.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.324 Test: blockdev writev readv 8 blocks ...passed 00:15:53.324 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.324 Test: blockdev writev readv block ...passed 00:15:53.324 Test: blockdev writev readv size > 128k ...passed 00:15:53.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.324 Test: blockdev comparev and writev ...[2024-07-25 02:40:40.198265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x250008000 len:0x1000 00:15:53.324 [2024-07-25 02:40:40.198316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:53.324 passed 00:15:53.324 Test: blockdev nvme passthru rw ...passed 00:15:53.324 Test: blockdev nvme passthru vendor specific ...[2024-07-25 02:40:40.198634] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:53.324 [2024-07-25 02:40:40.198649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:53.324 passed 00:15:53.324 Test: blockdev nvme admin passthru ...passed 00:15:53.324 Test: blockdev copy ...passed 00:15:53.324 00:15:53.324 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.324 suites 1 1 n/a 0 0 00:15:53.324 tests 23 23 23 0 0 00:15:53.324 asserts 152 152 152 0 n/a 00:15:53.324 00:15:53.324 Elapsed time = 0.031 seconds 00:15:53.324 0 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 67734 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 67734 ']' 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 67734 00:15:53.324 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 67734 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:15:53.584 killing process with pid 67734 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67734' 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 67734 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 67734 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:53.584 00:15:53.584 real 0m1.221s 00:15:53.584 user 0m2.272s 00:15:53.584 sys 0m0.564s 00:15:53.584 02:40:40 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.584 02:40:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:53.584 ************************************ 00:15:53.584 END TEST bdev_bounds 00:15:53.584 ************************************ 00:15:53.584 02:40:40 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:15:53.584 02:40:40 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:15:53.584 02:40:40 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:53.584 02:40:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.584 02:40:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.584 ************************************ 00:15:53.584 START TEST bdev_nbd 00:15:53.584 ************************************ 00:15:53.584 02:40:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:15:53.584 02:40:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:53.584 02:40:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ FreeBSD == Linux ]] 00:15:53.584 02:40:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # return 0 00:15:53.584 00:15:53.584 real 0m0.007s 00:15:53.584 user 0m0.007s 00:15:53.584 sys 0m0.001s 00:15:53.584 02:40:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.584 02:40:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:53.584 ************************************ 00:15:53.584 END TEST bdev_nbd 00:15:53.584 ************************************ 00:15:53.844 02:40:40 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:15:53.844 02:40:40 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:53.844 02:40:40 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:15:53.844 skipping fio tests on NVMe due to multi-ns failures. 00:15:53.844 02:40:40 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:15:53.844 02:40:40 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:53.844 02:40:40 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:53.844 02:40:40 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:53.844 02:40:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.844 02:40:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.844 ************************************ 00:15:53.844 START TEST bdev_verify 00:15:53.844 ************************************ 00:15:53.844 02:40:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:53.844 [2024-07-25 02:40:40.562601] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
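The bdev_verify pass above reduces to a single bdevperf invocation against the generated bdev.json. The flag glosses below are standard bdevperf semantics rather than anything stated in this log, and -C is left unannotated because its role is not shown here:

    #   -q 128     I/O queue depth              -o 4096   I/O size in bytes
    #   -w verify  data-verification workload   -t 5      run time in seconds
    #   -m 0x3     core mask, cores 0 and 1 (matching the two reactors above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''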
00:15:53.844 [2024-07-25 02:40:40.562856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:54.414 EAL: TSC is not safe to use in SMP mode 00:15:54.414 EAL: TSC is not invariant 00:15:54.673 [2024-07-25 02:40:41.321480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.673 [2024-07-25 02:40:41.413547] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:54.673 [2024-07-25 02:40:41.413574] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:15:54.673 [2024-07-25 02:40:41.415744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.673 [2024-07-25 02:40:41.415745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.673 [2024-07-25 02:40:41.471731] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:15:54.673 Running I/O for 5 seconds... 00:15:59.984 00:15:59.984 Latency(us) 00:15:59.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.984 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:59.984 Verification LBA range: start 0x0 length 0xa0000 00:15:59.984 Nvme0n1 : 5.00 25430.56 99.34 0.00 0.00 5020.52 319.52 10281.91 00:15:59.984 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:59.984 Verification LBA range: start 0xa0000 length 0xa0000 00:15:59.984 Nvme0n1 : 5.00 28624.07 111.81 0.00 0.00 4460.47 248.12 10567.52 00:15:59.984 =================================================================================================================== 00:15:59.984 Total : 54054.63 211.15 0.00 0.00 4723.97 248.12 10567.52 00:16:01.363 00:16:01.363 real 0m7.309s 00:16:01.363 user 0m12.778s 00:16:01.363 sys 0m0.797s 00:16:01.363 02:40:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.363 02:40:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:01.363 ************************************ 00:16:01.363 END TEST bdev_verify 00:16:01.363 ************************************ 00:16:01.363 02:40:47 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:01.363 02:40:47 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:01.363 02:40:47 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:16:01.364 02:40:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.364 02:40:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.364 ************************************ 00:16:01.364 START TEST bdev_verify_big_io 00:16:01.364 ************************************ 00:16:01.364 02:40:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:01.364 [2024-07-25 02:40:47.927371] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:16:01.364 [2024-07-25 02:40:47.927687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:01.624 EAL: TSC is not safe to use in SMP mode 00:16:01.624 EAL: TSC is not invariant 00:16:01.624 [2024-07-25 02:40:48.372769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:01.624 [2024-07-25 02:40:48.488011] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:01.624 [2024-07-25 02:40:48.488035] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:01.624 [2024-07-25 02:40:48.491098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.624 [2024-07-25 02:40:48.491064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.882 [2024-07-25 02:40:48.551510] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:01.882 Running I/O for 5 seconds... 00:16:07.140 00:16:07.140 Latency(us) 00:16:07.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.140 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:07.140 Verification LBA range: start 0x0 length 0xa000 00:16:07.140 Nvme0n1 : 5.00 9186.84 574.18 0.00 0.00 13852.76 228.49 27761.15 00:16:07.140 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:07.140 Verification LBA range: start 0xa000 length 0xa000 00:16:07.140 Nvme0n1 : 5.01 9192.12 574.51 0.00 0.00 13848.66 144.59 37928.82 00:16:07.140 =================================================================================================================== 00:16:07.140 Total : 18378.95 1148.68 0.00 0.00 13850.71 144.59 37928.82 00:16:12.415 00:16:12.415 real 0m10.731s 00:16:12.415 user 0m20.177s 00:16:12.415 sys 0m0.525s 00:16:12.415 02:40:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.415 02:40:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.415 ************************************ 00:16:12.415 END TEST bdev_verify_big_io 00:16:12.415 ************************************ 00:16:12.415 02:40:58 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:12.415 02:40:58 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.415 02:40:58 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:12.415 02:40:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.415 02:40:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.415 ************************************ 00:16:12.415 START TEST bdev_write_zeroes 00:16:12.415 ************************************ 00:16:12.415 02:40:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:12.415 [2024-07-25 02:40:58.728039] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:16:12.415 [2024-07-25 02:40:58.728378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:12.415 EAL: TSC is not safe to use in SMP mode 00:16:12.415 EAL: TSC is not invariant 00:16:12.415 [2024-07-25 02:40:59.169002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.415 [2024-07-25 02:40:59.285368] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:12.415 [2024-07-25 02:40:59.287712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.672 [2024-07-25 02:40:59.350130] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:12.672 Running I/O for 1 seconds... 00:16:13.605 00:16:13.605 Latency(us) 00:16:13.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.605 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:13.605 Nvme0n1 : 1.00 73516.48 287.17 0.00 0.00 1739.33 735.44 15422.86 00:16:13.605 =================================================================================================================== 00:16:13.605 Total : 73516.48 287.17 0.00 0.00 1739.33 735.44 15422.86 00:16:13.865 00:16:13.865 real 0m1.976s 00:16:13.865 user 0m1.484s 00:16:13.865 sys 0m0.488s 00:16:13.865 02:41:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.865 02:41:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:13.865 ************************************ 00:16:13.865 END TEST bdev_write_zeroes 00:16:13.865 ************************************ 00:16:13.865 02:41:00 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:13.865 02:41:00 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.865 02:41:00 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:13.865 02:41:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.865 02:41:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:13.865 ************************************ 00:16:13.865 START TEST bdev_json_nonenclosed 00:16:13.865 ************************************ 00:16:13.865 02:41:00 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:13.865 [2024-07-25 02:41:00.763543] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:16:13.865 [2024-07-25 02:41:00.763852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:14.436 EAL: TSC is not safe to use in SMP mode 00:16:14.436 EAL: TSC is not invariant 00:16:14.436 [2024-07-25 02:41:01.203937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.436 [2024-07-25 02:41:01.322590] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:14.436 [2024-07-25 02:41:01.324813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.436 [2024-07-25 02:41:01.324846] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:14.436 [2024-07-25 02:41:01.324853] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:14.436 [2024-07-25 02:41:01.324858] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:14.696 00:16:14.696 real 0m0.738s 00:16:14.696 user 0m0.247s 00:16:14.696 sys 0m0.488s 00:16:14.696 02:41:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:16:14.696 02:41:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.696 02:41:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:14.696 ************************************ 00:16:14.696 END TEST bdev_json_nonenclosed 00:16:14.696 ************************************ 00:16:14.696 02:41:01 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:16:14.696 02:41:01 blockdev_nvme -- bdev/blockdev.sh@781 -- # true 00:16:14.696 02:41:01 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.696 02:41:01 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:16:14.696 02:41:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.696 02:41:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.696 ************************************ 00:16:14.696 START TEST bdev_json_nonarray 00:16:14.696 ************************************ 00:16:14.696 02:41:01 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:14.696 [2024-07-25 02:41:01.565148] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:16:14.696 [2024-07-25 02:41:01.565486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:15.266 EAL: TSC is not safe to use in SMP mode 00:16:15.266 EAL: TSC is not invariant 00:16:15.266 [2024-07-25 02:41:02.014507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.266 [2024-07-25 02:41:02.131276] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:15.266 [2024-07-25 02:41:02.133567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.266 [2024-07-25 02:41:02.133601] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:15.266 [2024-07-25 02:41:02.133609] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:15.266 [2024-07-25 02:41:02.133615] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:15.526 00:16:15.526 real 0m0.746s 00:16:15.526 user 0m0.254s 00:16:15.526 sys 0m0.489s 00:16:15.526 02:41:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:16:15.526 02:41:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.526 02:41:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:15.526 ************************************ 00:16:15.526 END TEST bdev_json_nonarray 00:16:15.526 ************************************ 00:16:15.526 02:41:02 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:16:15.526 02:41:02 blockdev_nvme -- bdev/blockdev.sh@784 -- # true 00:16:15.526 02:41:02 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:16:15.526 02:41:02 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:16:15.527 02:41:02 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:16:15.527 00:16:15.527 real 0m25.680s 00:16:15.527 user 0m39.183s 00:16:15.527 sys 0m4.846s 00:16:15.527 02:41:02 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.527 02:41:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:15.527 ************************************ 00:16:15.527 END TEST blockdev_nvme 00:16:15.527 ************************************ 00:16:15.527 02:41:02 -- common/autotest_common.sh@1142 -- # return 0 00:16:15.527 02:41:02 -- spdk/autotest.sh@213 -- # uname -s 00:16:15.527 02:41:02 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:16:15.527 02:41:02 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:16:15.527 02:41:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:15.527 02:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.527 02:41:02 -- common/autotest_common.sh@10 -- # set +x 00:16:15.527 ************************************ 00:16:15.527 START TEST nvme 00:16:15.527 ************************************ 00:16:15.527 02:41:02 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:16:15.786 * Looking for test storage... 
00:16:15.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:15.786 02:41:02 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:16.046 hw.nic_uio.bdfs="0:16:0" 00:16:16.046 02:41:02 nvme -- nvme/nvme.sh@79 -- # uname 00:16:16.046 02:41:02 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:16:16.047 02:41:02 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:16.047 02:41:02 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:16:16.047 02:41:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.047 02:41:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.047 ************************************ 00:16:16.047 START TEST nvme_reset 00:16:16.047 ************************************ 00:16:16.047 02:41:02 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:16.618 EAL: TSC is not safe to use in SMP mode 00:16:16.618 EAL: TSC is not invariant 00:16:16.618 [2024-07-25 02:41:03.329833] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:16.618 Initializing NVMe Controllers 00:16:16.618 Skipping QEMU NVMe SSD at 0000:00:10.0 00:16:16.618 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:16:16.618 00:16:16.618 real 0m0.528s 00:16:16.618 user 0m0.015s 00:16:16.618 sys 0m0.512s 00:16:16.618 02:41:03 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.618 02:41:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:16:16.618 ************************************ 00:16:16.618 END TEST nvme_reset 00:16:16.618 ************************************ 00:16:16.618 02:41:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:16.618 02:41:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:16:16.618 02:41:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:16.618 02:41:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.618 02:41:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.618 ************************************ 00:16:16.618 START TEST nvme_identify 00:16:16.618 ************************************ 00:16:16.618 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:16:16.618 02:41:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:16:16.618 02:41:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:16:16.619 02:41:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:16:16.619 02:41:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:16:16.619 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:16.619 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:16:16.619 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:16.619 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:16.619 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:16.879 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:16:16.879 02:41:03 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:16:16.879 02:41:03 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:16:17.138 EAL: TSC is not safe to use in SMP mode 00:16:17.138 EAL: TSC is not invariant 00:16:17.138 [2024-07-25 02:41:03.986601] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:17.138 ===================================================== 00:16:17.138 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:17.138 ===================================================== 00:16:17.138 Controller Capabilities/Features 00:16:17.138 ================================ 00:16:17.138 Vendor ID: 1b36 00:16:17.138 Subsystem Vendor ID: 1af4 00:16:17.138 Serial Number: 12340 00:16:17.138 Model Number: QEMU NVMe Ctrl 00:16:17.138 Firmware Version: 8.0.0 00:16:17.138 Recommended Arb Burst: 6 00:16:17.138 IEEE OUI Identifier: 00 54 52 00:16:17.138 Multi-path I/O 00:16:17.138 May have multiple subsystem ports: No 00:16:17.138 May have multiple controllers: No 00:16:17.138 Associated with SR-IOV VF: No 00:16:17.138 Max Data Transfer Size: 524288 00:16:17.138 Max Number of Namespaces: 256 00:16:17.138 Max Number of I/O Queues: 64 00:16:17.138 NVMe Specification Version (VS): 1.4 00:16:17.138 NVMe Specification Version (Identify): 1.4 00:16:17.138 Maximum Queue Entries: 2048 00:16:17.138 Contiguous Queues Required: Yes 00:16:17.138 Arbitration Mechanisms Supported 00:16:17.138 Weighted Round Robin: Not Supported 00:16:17.138 Vendor Specific: Not Supported 00:16:17.139 Reset Timeout: 7500 ms 00:16:17.139 Doorbell Stride: 4 bytes 00:16:17.139 NVM Subsystem Reset: Not Supported 00:16:17.139 Command Sets Supported 00:16:17.139 NVM Command Set: Supported 00:16:17.139 Boot Partition: Not Supported 00:16:17.139 Memory Page Size Minimum: 4096 bytes 00:16:17.139 Memory Page Size Maximum: 65536 bytes 00:16:17.139 Persistent Memory Region: Not Supported 00:16:17.139 Optional Asynchronous Events Supported 00:16:17.139 Namespace Attribute Notices: Supported 00:16:17.139 Firmware Activation Notices: Not Supported 00:16:17.139 ANA Change Notices: Not Supported 00:16:17.139 PLE Aggregate Log Change Notices: Not Supported 00:16:17.139 LBA Status Info Alert Notices: Not Supported 00:16:17.139 EGE Aggregate Log Change Notices: Not Supported 00:16:17.139 Normal NVM Subsystem Shutdown event: Not Supported 00:16:17.139 Zone Descriptor Change Notices: Not Supported 00:16:17.139 Discovery Log Change Notices: Not Supported 00:16:17.139 Controller Attributes 00:16:17.139 128-bit Host Identifier: Not Supported 00:16:17.139 Non-Operational Permissive Mode: Not Supported 00:16:17.139 NVM Sets: Not Supported 00:16:17.139 Read Recovery Levels: Not Supported 00:16:17.139 Endurance Groups: Not Supported 00:16:17.139 Predictable Latency Mode: Not Supported 00:16:17.139 Traffic Based Keep ALive: Not Supported 00:16:17.139 Namespace Granularity: Not Supported 00:16:17.139 SQ Associations: Not Supported 00:16:17.139 UUID List: Not Supported 00:16:17.139 Multi-Domain Subsystem: Not Supported 00:16:17.139 Fixed Capacity Management: Not Supported 00:16:17.139 Variable Capacity Management: Not Supported 00:16:17.139 Delete Endurance Group: Not Supported 00:16:17.139 Delete NVM Set: Not Supported 00:16:17.139 Extended LBA Formats Supported: Supported 00:16:17.139 Flexible Data Placement Supported: Not Supported 00:16:17.139 00:16:17.139 Controller Memory Buffer Support 00:16:17.139 ================================ 00:16:17.139 Supported: No 00:16:17.139 00:16:17.139 
Persistent Memory Region Support 00:16:17.139 ================================ 00:16:17.139 Supported: No 00:16:17.139 00:16:17.139 Admin Command Set Attributes 00:16:17.139 ============================ 00:16:17.139 Security Send/Receive: Not Supported 00:16:17.139 Format NVM: Supported 00:16:17.139 Firmware Activate/Download: Not Supported 00:16:17.139 Namespace Management: Supported 00:16:17.139 Device Self-Test: Not Supported 00:16:17.139 Directives: Supported 00:16:17.139 NVMe-MI: Not Supported 00:16:17.139 Virtualization Management: Not Supported 00:16:17.139 Doorbell Buffer Config: Supported 00:16:17.139 Get LBA Status Capability: Not Supported 00:16:17.139 Command & Feature Lockdown Capability: Not Supported 00:16:17.139 Abort Command Limit: 4 00:16:17.139 Async Event Request Limit: 4 00:16:17.139 Number of Firmware Slots: N/A 00:16:17.139 Firmware Slot 1 Read-Only: N/A 00:16:17.139 Firmware Activation Without Reset: N/A 00:16:17.139 Multiple Update Detection Support: N/A 00:16:17.139 Firmware Update Granularity: No Information Provided 00:16:17.139 Per-Namespace SMART Log: Yes 00:16:17.139 Asymmetric Namespace Access Log Page: Not Supported 00:16:17.139 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:16:17.139 Command Effects Log Page: Supported 00:16:17.139 Get Log Page Extended Data: Supported 00:16:17.139 Telemetry Log Pages: Not Supported 00:16:17.139 Persistent Event Log Pages: Not Supported 00:16:17.139 Supported Log Pages Log Page: May Support 00:16:17.139 Commands Supported & Effects Log Page: Not Supported 00:16:17.139 Feature Identifiers & Effects Log Page:May Support 00:16:17.139 NVMe-MI Commands & Effects Log Page: May Support 00:16:17.139 Data Area 4 for Telemetry Log: Not Supported 00:16:17.139 Error Log Page Entries Supported: 1 00:16:17.139 Keep Alive: Not Supported 00:16:17.139 00:16:17.139 NVM Command Set Attributes 00:16:17.139 ========================== 00:16:17.139 Submission Queue Entry Size 00:16:17.139 Max: 64 00:16:17.139 Min: 64 00:16:17.139 Completion Queue Entry Size 00:16:17.139 Max: 16 00:16:17.139 Min: 16 00:16:17.139 Number of Namespaces: 256 00:16:17.139 Compare Command: Supported 00:16:17.139 Write Uncorrectable Command: Not Supported 00:16:17.139 Dataset Management Command: Supported 00:16:17.139 Write Zeroes Command: Supported 00:16:17.139 Set Features Save Field: Supported 00:16:17.139 Reservations: Not Supported 00:16:17.139 Timestamp: Supported 00:16:17.139 Copy: Supported 00:16:17.139 Volatile Write Cache: Present 00:16:17.139 Atomic Write Unit (Normal): 1 00:16:17.139 Atomic Write Unit (PFail): 1 00:16:17.139 Atomic Compare & Write Unit: 1 00:16:17.139 Fused Compare & Write: Not Supported 00:16:17.139 Scatter-Gather List 00:16:17.139 SGL Command Set: Supported 00:16:17.139 SGL Keyed: Not Supported 00:16:17.139 SGL Bit Bucket Descriptor: Not Supported 00:16:17.139 SGL Metadata Pointer: Not Supported 00:16:17.139 Oversized SGL: Not Supported 00:16:17.139 SGL Metadata Address: Not Supported 00:16:17.139 SGL Offset: Not Supported 00:16:17.139 Transport SGL Data Block: Not Supported 00:16:17.139 Replay Protected Memory Block: Not Supported 00:16:17.139 00:16:17.139 Firmware Slot Information 00:16:17.139 ========================= 00:16:17.139 Active slot: 1 00:16:17.139 Slot 1 Firmware Revision: 1.0 00:16:17.139 00:16:17.139 00:16:17.139 Commands Supported and Effects 00:16:17.139 ============================== 00:16:17.139 Admin Commands 00:16:17.139 -------------- 00:16:17.139 Delete I/O Submission Queue (00h): Supported 00:16:17.139 Create I/O 
Submission Queue (01h): Supported 00:16:17.139 Get Log Page (02h): Supported 00:16:17.139 Delete I/O Completion Queue (04h): Supported 00:16:17.139 Create I/O Completion Queue (05h): Supported 00:16:17.139 Identify (06h): Supported 00:16:17.139 Abort (08h): Supported 00:16:17.139 Set Features (09h): Supported 00:16:17.139 Get Features (0Ah): Supported 00:16:17.139 Asynchronous Event Request (0Ch): Supported 00:16:17.139 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:17.139 Directive Send (19h): Supported 00:16:17.139 Directive Receive (1Ah): Supported 00:16:17.139 Virtualization Management (1Ch): Supported 00:16:17.139 Doorbell Buffer Config (7Ch): Supported 00:16:17.139 Format NVM (80h): Supported LBA-Change 00:16:17.139 I/O Commands 00:16:17.139 ------------ 00:16:17.139 Flush (00h): Supported LBA-Change 00:16:17.139 Write (01h): Supported LBA-Change 00:16:17.139 Read (02h): Supported 00:16:17.139 Compare (05h): Supported 00:16:17.139 Write Zeroes (08h): Supported LBA-Change 00:16:17.139 Dataset Management (09h): Supported LBA-Change 00:16:17.139 Unknown (0Ch): Supported 00:16:17.139 Unknown (12h): Supported 00:16:17.139 Copy (19h): Supported LBA-Change 00:16:17.139 Unknown (1Dh): Supported LBA-Change 00:16:17.139 00:16:17.139 Error Log 00:16:17.139 ========= 00:16:17.139 00:16:17.139 Arbitration 00:16:17.139 =========== 00:16:17.139 Arbitration Burst: no limit 00:16:17.139 00:16:17.139 Power Management 00:16:17.139 ================ 00:16:17.139 Number of Power States: 1 00:16:17.139 Current Power State: Power State #0 00:16:17.139 Power State #0: 00:16:17.139 Max Power: 25.00 W 00:16:17.139 Non-Operational State: Operational 00:16:17.139 Entry Latency: 16 microseconds 00:16:17.139 Exit Latency: 4 microseconds 00:16:17.139 Relative Read Throughput: 0 00:16:17.139 Relative Read Latency: 0 00:16:17.139 Relative Write Throughput: 0 00:16:17.139 Relative Write Latency: 0 00:16:17.399 Idle Power: Not Reported 00:16:17.399 Active Power: Not Reported 00:16:17.399 Non-Operational Permissive Mode: Not Supported 00:16:17.399 00:16:17.399 Health Information 00:16:17.399 ================== 00:16:17.399 Critical Warnings: 00:16:17.399 Available Spare Space: OK 00:16:17.399 Temperature: OK 00:16:17.399 Device Reliability: OK 00:16:17.399 Read Only: No 00:16:17.399 Volatile Memory Backup: OK 00:16:17.399 Current Temperature: 323 Kelvin (50 Celsius) 00:16:17.399 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:17.399 Available Spare: 0% 00:16:17.399 Available Spare Threshold: 0% 00:16:17.399 Life Percentage Used: 0% 00:16:17.399 Data Units Read: 13959 00:16:17.399 Data Units Written: 13943 00:16:17.399 Host Read Commands: 362670 00:16:17.399 Host Write Commands: 362519 00:16:17.399 Controller Busy Time: 0 minutes 00:16:17.399 Power Cycles: 0 00:16:17.399 Power On Hours: 0 hours 00:16:17.399 Unsafe Shutdowns: 0 00:16:17.399 Unrecoverable Media Errors: 0 00:16:17.399 Lifetime Error Log Entries: 0 00:16:17.399 Warning Temperature Time: 0 minutes 00:16:17.399 Critical Temperature Time: 0 minutes 00:16:17.399 00:16:17.399 Number of Queues 00:16:17.399 ================ 00:16:17.399 Number of I/O Submission Queues: 64 00:16:17.399 Number of I/O Completion Queues: 64 00:16:17.399 00:16:17.399 ZNS Specific Controller Data 00:16:17.399 ============================ 00:16:17.399 Zone Append Size Limit: 0 00:16:17.399 00:16:17.399 00:16:17.399 Active Namespaces 00:16:17.399 ================= 00:16:17.399 Namespace ID:1 00:16:17.399 Error Recovery Timeout: Unlimited 00:16:17.399 Command Set 
Identifier: NVM (00h) 00:16:17.399 Deallocate: Supported 00:16:17.399 Deallocated/Unwritten Error: Supported 00:16:17.399 Deallocated Read Value: All 0x00 00:16:17.399 Deallocate in Write Zeroes: Not Supported 00:16:17.399 Deallocated Guard Field: 0xFFFF 00:16:17.399 Flush: Supported 00:16:17.399 Reservation: Not Supported 00:16:17.399 Namespace Sharing Capabilities: Private 00:16:17.399 Size (in LBAs): 1310720 (5GiB) 00:16:17.399 Capacity (in LBAs): 1310720 (5GiB) 00:16:17.399 Utilization (in LBAs): 1310720 (5GiB) 00:16:17.399 Thin Provisioning: Not Supported 00:16:17.399 Per-NS Atomic Units: No 00:16:17.399 Maximum Single Source Range Length: 128 00:16:17.399 Maximum Copy Length: 128 00:16:17.399 Maximum Source Range Count: 128 00:16:17.399 NGUID/EUI64 Never Reused: No 00:16:17.399 Namespace Write Protected: No 00:16:17.399 Number of LBA Formats: 8 00:16:17.400 Current LBA Format: LBA Format #04 00:16:17.400 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:17.400 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:17.400 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:17.400 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:17.400 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:17.400 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:17.400 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:17.400 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:17.400 00:16:17.400 NVM Specific Namespace Data 00:16:17.400 =========================== 00:16:17.400 Logical Block Storage Tag Mask: 0 00:16:17.400 Protection Information Capabilities: 00:16:17.400 16b Guard Protection Information Storage Tag Support: No 00:16:17.400 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:17.400 Storage Tag Check Read Support: No 00:16:17.400 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.400 02:41:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:16:17.400 02:41:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:16:17.660 EAL: TSC is not safe to use in SMP mode 00:16:17.660 EAL: TSC is not invariant 00:16:17.660 [2024-07-25 02:41:04.487899] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:17.660 ===================================================== 00:16:17.660 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:17.660 ===================================================== 00:16:17.660 Controller Capabilities/Features 00:16:17.660 ================================ 00:16:17.660 Vendor ID: 1b36 00:16:17.660 Subsystem Vendor ID: 1af4 00:16:17.660 Serial Number: 12340 00:16:17.660 Model Number: QEMU NVMe Ctrl 
00:16:17.660 Firmware Version: 8.0.0 00:16:17.660 Recommended Arb Burst: 6 00:16:17.660 IEEE OUI Identifier: 00 54 52 00:16:17.660 Multi-path I/O 00:16:17.661 May have multiple subsystem ports: No 00:16:17.661 May have multiple controllers: No 00:16:17.661 Associated with SR-IOV VF: No 00:16:17.661 Max Data Transfer Size: 524288 00:16:17.661 Max Number of Namespaces: 256 00:16:17.661 Max Number of I/O Queues: 64 00:16:17.661 NVMe Specification Version (VS): 1.4 00:16:17.661 NVMe Specification Version (Identify): 1.4 00:16:17.661 Maximum Queue Entries: 2048 00:16:17.661 Contiguous Queues Required: Yes 00:16:17.661 Arbitration Mechanisms Supported 00:16:17.661 Weighted Round Robin: Not Supported 00:16:17.661 Vendor Specific: Not Supported 00:16:17.661 Reset Timeout: 7500 ms 00:16:17.661 Doorbell Stride: 4 bytes 00:16:17.661 NVM Subsystem Reset: Not Supported 00:16:17.661 Command Sets Supported 00:16:17.661 NVM Command Set: Supported 00:16:17.661 Boot Partition: Not Supported 00:16:17.661 Memory Page Size Minimum: 4096 bytes 00:16:17.661 Memory Page Size Maximum: 65536 bytes 00:16:17.661 Persistent Memory Region: Not Supported 00:16:17.661 Optional Asynchronous Events Supported 00:16:17.661 Namespace Attribute Notices: Supported 00:16:17.661 Firmware Activation Notices: Not Supported 00:16:17.661 ANA Change Notices: Not Supported 00:16:17.661 PLE Aggregate Log Change Notices: Not Supported 00:16:17.661 LBA Status Info Alert Notices: Not Supported 00:16:17.661 EGE Aggregate Log Change Notices: Not Supported 00:16:17.661 Normal NVM Subsystem Shutdown event: Not Supported 00:16:17.661 Zone Descriptor Change Notices: Not Supported 00:16:17.661 Discovery Log Change Notices: Not Supported 00:16:17.661 Controller Attributes 00:16:17.661 128-bit Host Identifier: Not Supported 00:16:17.661 Non-Operational Permissive Mode: Not Supported 00:16:17.661 NVM Sets: Not Supported 00:16:17.661 Read Recovery Levels: Not Supported 00:16:17.661 Endurance Groups: Not Supported 00:16:17.661 Predictable Latency Mode: Not Supported 00:16:17.661 Traffic Based Keep ALive: Not Supported 00:16:17.661 Namespace Granularity: Not Supported 00:16:17.661 SQ Associations: Not Supported 00:16:17.661 UUID List: Not Supported 00:16:17.661 Multi-Domain Subsystem: Not Supported 00:16:17.661 Fixed Capacity Management: Not Supported 00:16:17.661 Variable Capacity Management: Not Supported 00:16:17.661 Delete Endurance Group: Not Supported 00:16:17.661 Delete NVM Set: Not Supported 00:16:17.661 Extended LBA Formats Supported: Supported 00:16:17.661 Flexible Data Placement Supported: Not Supported 00:16:17.661 00:16:17.661 Controller Memory Buffer Support 00:16:17.661 ================================ 00:16:17.661 Supported: No 00:16:17.661 00:16:17.661 Persistent Memory Region Support 00:16:17.661 ================================ 00:16:17.661 Supported: No 00:16:17.661 00:16:17.661 Admin Command Set Attributes 00:16:17.661 ============================ 00:16:17.661 Security Send/Receive: Not Supported 00:16:17.661 Format NVM: Supported 00:16:17.661 Firmware Activate/Download: Not Supported 00:16:17.661 Namespace Management: Supported 00:16:17.661 Device Self-Test: Not Supported 00:16:17.661 Directives: Supported 00:16:17.661 NVMe-MI: Not Supported 00:16:17.661 Virtualization Management: Not Supported 00:16:17.661 Doorbell Buffer Config: Supported 00:16:17.661 Get LBA Status Capability: Not Supported 00:16:17.661 Command & Feature Lockdown Capability: Not Supported 00:16:17.661 Abort Command Limit: 4 00:16:17.661 Async Event Request 
Limit: 4 00:16:17.661 Number of Firmware Slots: N/A 00:16:17.661 Firmware Slot 1 Read-Only: N/A 00:16:17.661 Firmware Activation Without Reset: N/A 00:16:17.661 Multiple Update Detection Support: N/A 00:16:17.661 Firmware Update Granularity: No Information Provided 00:16:17.661 Per-Namespace SMART Log: Yes 00:16:17.661 Asymmetric Namespace Access Log Page: Not Supported 00:16:17.661 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:16:17.661 Command Effects Log Page: Supported 00:16:17.661 Get Log Page Extended Data: Supported 00:16:17.661 Telemetry Log Pages: Not Supported 00:16:17.661 Persistent Event Log Pages: Not Supported 00:16:17.661 Supported Log Pages Log Page: May Support 00:16:17.661 Commands Supported & Effects Log Page: Not Supported 00:16:17.661 Feature Identifiers & Effects Log Page:May Support 00:16:17.661 NVMe-MI Commands & Effects Log Page: May Support 00:16:17.661 Data Area 4 for Telemetry Log: Not Supported 00:16:17.661 Error Log Page Entries Supported: 1 00:16:17.661 Keep Alive: Not Supported 00:16:17.661 00:16:17.661 NVM Command Set Attributes 00:16:17.661 ========================== 00:16:17.661 Submission Queue Entry Size 00:16:17.661 Max: 64 00:16:17.661 Min: 64 00:16:17.661 Completion Queue Entry Size 00:16:17.661 Max: 16 00:16:17.661 Min: 16 00:16:17.661 Number of Namespaces: 256 00:16:17.661 Compare Command: Supported 00:16:17.661 Write Uncorrectable Command: Not Supported 00:16:17.661 Dataset Management Command: Supported 00:16:17.661 Write Zeroes Command: Supported 00:16:17.661 Set Features Save Field: Supported 00:16:17.661 Reservations: Not Supported 00:16:17.661 Timestamp: Supported 00:16:17.661 Copy: Supported 00:16:17.661 Volatile Write Cache: Present 00:16:17.661 Atomic Write Unit (Normal): 1 00:16:17.661 Atomic Write Unit (PFail): 1 00:16:17.661 Atomic Compare & Write Unit: 1 00:16:17.661 Fused Compare & Write: Not Supported 00:16:17.661 Scatter-Gather List 00:16:17.661 SGL Command Set: Supported 00:16:17.661 SGL Keyed: Not Supported 00:16:17.661 SGL Bit Bucket Descriptor: Not Supported 00:16:17.661 SGL Metadata Pointer: Not Supported 00:16:17.661 Oversized SGL: Not Supported 00:16:17.661 SGL Metadata Address: Not Supported 00:16:17.661 SGL Offset: Not Supported 00:16:17.661 Transport SGL Data Block: Not Supported 00:16:17.661 Replay Protected Memory Block: Not Supported 00:16:17.661 00:16:17.661 Firmware Slot Information 00:16:17.661 ========================= 00:16:17.661 Active slot: 1 00:16:17.661 Slot 1 Firmware Revision: 1.0 00:16:17.661 00:16:17.661 00:16:17.661 Commands Supported and Effects 00:16:17.661 ============================== 00:16:17.661 Admin Commands 00:16:17.661 -------------- 00:16:17.661 Delete I/O Submission Queue (00h): Supported 00:16:17.661 Create I/O Submission Queue (01h): Supported 00:16:17.661 Get Log Page (02h): Supported 00:16:17.661 Delete I/O Completion Queue (04h): Supported 00:16:17.661 Create I/O Completion Queue (05h): Supported 00:16:17.661 Identify (06h): Supported 00:16:17.661 Abort (08h): Supported 00:16:17.661 Set Features (09h): Supported 00:16:17.661 Get Features (0Ah): Supported 00:16:17.661 Asynchronous Event Request (0Ch): Supported 00:16:17.661 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:17.661 Directive Send (19h): Supported 00:16:17.661 Directive Receive (1Ah): Supported 00:16:17.661 Virtualization Management (1Ch): Supported 00:16:17.661 Doorbell Buffer Config (7Ch): Supported 00:16:17.661 Format NVM (80h): Supported LBA-Change 00:16:17.661 I/O Commands 00:16:17.661 ------------ 
00:16:17.661 Flush (00h): Supported LBA-Change 00:16:17.661 Write (01h): Supported LBA-Change 00:16:17.661 Read (02h): Supported 00:16:17.661 Compare (05h): Supported 00:16:17.661 Write Zeroes (08h): Supported LBA-Change 00:16:17.661 Dataset Management (09h): Supported LBA-Change 00:16:17.661 Unknown (0Ch): Supported 00:16:17.661 Unknown (12h): Supported 00:16:17.661 Copy (19h): Supported LBA-Change 00:16:17.661 Unknown (1Dh): Supported LBA-Change 00:16:17.661 00:16:17.661 Error Log 00:16:17.661 ========= 00:16:17.661 00:16:17.661 Arbitration 00:16:17.661 =========== 00:16:17.661 Arbitration Burst: no limit 00:16:17.661 00:16:17.661 Power Management 00:16:17.661 ================ 00:16:17.661 Number of Power States: 1 00:16:17.661 Current Power State: Power State #0 00:16:17.662 Power State #0: 00:16:17.662 Max Power: 25.00 W 00:16:17.662 Non-Operational State: Operational 00:16:17.662 Entry Latency: 16 microseconds 00:16:17.662 Exit Latency: 4 microseconds 00:16:17.662 Relative Read Throughput: 0 00:16:17.662 Relative Read Latency: 0 00:16:17.662 Relative Write Throughput: 0 00:16:17.662 Relative Write Latency: 0 00:16:17.662 Idle Power: Not Reported 00:16:17.662 Active Power: Not Reported 00:16:17.662 Non-Operational Permissive Mode: Not Supported 00:16:17.662 00:16:17.662 Health Information 00:16:17.662 ================== 00:16:17.662 Critical Warnings: 00:16:17.662 Available Spare Space: OK 00:16:17.662 Temperature: OK 00:16:17.662 Device Reliability: OK 00:16:17.662 Read Only: No 00:16:17.662 Volatile Memory Backup: OK 00:16:17.662 Current Temperature: 323 Kelvin (50 Celsius) 00:16:17.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:17.662 Available Spare: 0% 00:16:17.662 Available Spare Threshold: 0% 00:16:17.662 Life Percentage Used: 0% 00:16:17.662 Data Units Read: 13959 00:16:17.662 Data Units Written: 13943 00:16:17.662 Host Read Commands: 362670 00:16:17.662 Host Write Commands: 362519 00:16:17.662 Controller Busy Time: 0 minutes 00:16:17.662 Power Cycles: 0 00:16:17.662 Power On Hours: 0 hours 00:16:17.662 Unsafe Shutdowns: 0 00:16:17.662 Unrecoverable Media Errors: 0 00:16:17.662 Lifetime Error Log Entries: 0 00:16:17.662 Warning Temperature Time: 0 minutes 00:16:17.662 Critical Temperature Time: 0 minutes 00:16:17.662 00:16:17.662 Number of Queues 00:16:17.662 ================ 00:16:17.662 Number of I/O Submission Queues: 64 00:16:17.662 Number of I/O Completion Queues: 64 00:16:17.662 00:16:17.662 ZNS Specific Controller Data 00:16:17.662 ============================ 00:16:17.662 Zone Append Size Limit: 0 00:16:17.662 00:16:17.662 00:16:17.662 Active Namespaces 00:16:17.662 ================= 00:16:17.662 Namespace ID:1 00:16:17.662 Error Recovery Timeout: Unlimited 00:16:17.662 Command Set Identifier: NVM (00h) 00:16:17.662 Deallocate: Supported 00:16:17.662 Deallocated/Unwritten Error: Supported 00:16:17.662 Deallocated Read Value: All 0x00 00:16:17.662 Deallocate in Write Zeroes: Not Supported 00:16:17.662 Deallocated Guard Field: 0xFFFF 00:16:17.662 Flush: Supported 00:16:17.662 Reservation: Not Supported 00:16:17.662 Namespace Sharing Capabilities: Private 00:16:17.662 Size (in LBAs): 1310720 (5GiB) 00:16:17.662 Capacity (in LBAs): 1310720 (5GiB) 00:16:17.662 Utilization (in LBAs): 1310720 (5GiB) 00:16:17.662 Thin Provisioning: Not Supported 00:16:17.662 Per-NS Atomic Units: No 00:16:17.662 Maximum Single Source Range Length: 128 00:16:17.662 Maximum Copy Length: 128 00:16:17.662 Maximum Source Range Count: 128 00:16:17.662 NGUID/EUI64 Never Reused: No 
00:16:17.662 Namespace Write Protected: No 00:16:17.662 Number of LBA Formats: 8 00:16:17.662 Current LBA Format: LBA Format #04 00:16:17.662 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:17.662 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:17.662 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:17.662 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:17.662 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:17.662 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:17.662 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:17.662 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:17.662 00:16:17.662 NVM Specific Namespace Data 00:16:17.662 =========================== 00:16:17.662 Logical Block Storage Tag Mask: 0 00:16:17.662 Protection Information Capabilities: 00:16:17.662 16b Guard Protection Information Storage Tag Support: No 00:16:17.662 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:17.662 Storage Tag Check Read Support: No 00:16:17.662 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:17.662 00:16:17.662 real 0m1.100s 00:16:17.662 user 0m0.053s 00:16:17.662 sys 0m1.071s 00:16:17.662 02:41:04 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.662 02:41:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:16:17.662 ************************************ 00:16:17.662 END TEST nvme_identify 00:16:17.662 ************************************ 00:16:17.922 02:41:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:17.922 02:41:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:16:17.922 02:41:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:17.922 02:41:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.922 02:41:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.922 ************************************ 00:16:17.922 START TEST nvme_perf 00:16:17.922 ************************************ 00:16:17.922 02:41:04 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:16:17.922 02:41:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:16:18.182 EAL: TSC is not safe to use in SMP mode 00:16:18.182 EAL: TSC is not invariant 00:16:18.182 [2024-07-25 02:41:05.064651] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:19.563 Initializing NVMe Controllers 00:16:19.563 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:19.563 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:19.563 Initialization complete. Launching workers. 
00:16:19.563 ======================================================== 00:16:19.563 Latency(us) 00:16:19.563 Device Information : IOPS MiB/s Average min max 00:16:19.563 PCIE (0000:00:10.0) NSID 1 from core 0: 102787.96 1204.55 1245.45 171.52 4177.99 00:16:19.563 ======================================================== 00:16:19.563 Total : 102787.96 1204.55 1245.45 171.52 4177.99 00:16:19.563 00:16:19.563 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:16:19.563 ================================================================================= 00:16:19.563 1.00000% : 1021.051us 00:16:19.563 10.00000% : 1071.032us 00:16:19.563 25.00000% : 1106.733us 00:16:19.563 50.00000% : 1156.715us 00:16:19.563 75.00000% : 1285.239us 00:16:19.563 90.00000% : 1492.305us 00:16:19.563 95.00000% : 1785.054us 00:16:19.563 98.00000% : 2213.466us 00:16:19.563 99.00000% : 2399.112us 00:16:19.563 99.50000% : 2756.123us 00:16:19.563 99.90000% : 3227.377us 00:16:19.563 99.99000% : 4084.203us 00:16:19.563 99.99900% : 4169.885us 00:16:19.563 99.99990% : 4198.446us 00:16:19.563 99.99999% : 4198.446us 00:16:19.563 00:16:19.563 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:16:19.563 ============================================================================== 00:16:19.563 Range in us Cumulative IO count 00:16:19.563 171.365 - 172.258: 0.0010% ( 1) 00:16:19.563 172.258 - 173.150: 0.0019% ( 1) 00:16:19.563 173.150 - 174.043: 0.0029% ( 1) 00:16:19.563 174.935 - 175.828: 0.0039% ( 1) 00:16:19.563 176.720 - 177.613: 0.0049% ( 1) 00:16:19.563 177.613 - 178.505: 0.0058% ( 1) 00:16:19.563 179.398 - 180.290: 0.0068% ( 1) 00:16:19.563 186.538 - 187.431: 0.0078% ( 1) 00:16:19.563 187.431 - 188.323: 0.0097% ( 2) 00:16:19.563 189.216 - 190.108: 0.0107% ( 1) 00:16:19.563 191.001 - 191.893: 0.0117% ( 1) 00:16:19.563 191.893 - 192.786: 0.0126% ( 1) 00:16:19.563 193.678 - 194.571: 0.0136% ( 1) 00:16:19.563 200.819 - 201.711: 0.0146% ( 1) 00:16:19.563 201.711 - 202.604: 0.0156% ( 1) 00:16:19.563 203.496 - 204.389: 0.0165% ( 1) 00:16:19.563 207.066 - 207.959: 0.0175% ( 1) 00:16:19.563 208.851 - 209.744: 0.0185% ( 1) 00:16:19.563 224.024 - 224.917: 0.0194% ( 1) 00:16:19.563 224.917 - 225.809: 0.0204% ( 1) 00:16:19.563 228.487 - 230.272: 0.0214% ( 1) 00:16:19.563 248.122 - 249.907: 0.0224% ( 1) 00:16:19.563 376.646 - 378.431: 0.0233% ( 1) 00:16:19.563 378.431 - 380.216: 0.0243% ( 1) 00:16:19.563 642.619 - 646.189: 0.0272% ( 3) 00:16:19.563 646.189 - 649.759: 0.0301% ( 3) 00:16:19.563 649.759 - 653.330: 0.0331% ( 3) 00:16:19.563 653.330 - 656.900: 0.0360% ( 3) 00:16:19.563 656.900 - 660.470: 0.0389% ( 3) 00:16:19.563 660.470 - 664.040: 0.0418% ( 3) 00:16:19.563 664.040 - 667.610: 0.0447% ( 3) 00:16:19.563 667.610 - 671.180: 0.0477% ( 3) 00:16:19.563 671.180 - 674.750: 0.0506% ( 3) 00:16:19.563 674.750 - 678.320: 0.0535% ( 3) 00:16:19.563 678.320 - 681.890: 0.0564% ( 3) 00:16:19.563 681.890 - 685.461: 0.0593% ( 3) 00:16:19.563 685.461 - 689.031: 0.0622% ( 3) 00:16:19.563 689.031 - 692.601: 0.0642% ( 2) 00:16:19.563 692.601 - 696.171: 0.0671% ( 3) 00:16:19.563 696.171 - 699.741: 0.0700% ( 3) 00:16:19.563 699.741 - 703.311: 0.0729% ( 3) 00:16:19.563 703.311 - 706.881: 0.0759% ( 3) 00:16:19.563 706.881 - 710.451: 0.0778% ( 2) 00:16:19.563 956.789 - 963.929: 0.0788% ( 1) 00:16:19.563 971.069 - 978.209: 0.0827% ( 4) 00:16:19.563 978.209 - 985.350: 0.0972% ( 15) 00:16:19.563 985.350 - 992.490: 0.1361% ( 40) 00:16:19.563 992.490 - 999.630: 0.2285% ( 95) 00:16:19.563 999.630 - 1006.770: 0.3802% ( 156) 
00:16:19.563 1006.770 - 1013.910: 0.6379% ( 265) 00:16:19.563 1013.910 - 1021.051: 1.0493% ( 423) 00:16:19.563 1021.051 - 1028.191: 1.5842% ( 550) 00:16:19.563 1028.191 - 1035.331: 2.3301% ( 767) 00:16:19.563 1035.331 - 1042.471: 3.4504% ( 1152) 00:16:19.563 1042.471 - 1049.611: 4.9334% ( 1525) 00:16:19.563 1049.611 - 1056.752: 6.7072% ( 1824) 00:16:19.563 1056.752 - 1063.892: 8.8107% ( 2163) 00:16:19.563 1063.892 - 1071.032: 11.2010% ( 2458) 00:16:19.563 1071.032 - 1078.172: 13.8248% ( 2698) 00:16:19.564 1078.172 - 1085.313: 16.7043% ( 2961) 00:16:19.564 1085.313 - 1092.453: 19.7569% ( 3139) 00:16:19.564 1092.453 - 1099.593: 22.9291% ( 3262) 00:16:19.564 1099.593 - 1106.733: 26.1947% ( 3358) 00:16:19.564 1106.733 - 1113.873: 29.5809% ( 3482) 00:16:19.564 1113.873 - 1121.014: 33.0332% ( 3550) 00:16:19.564 1121.014 - 1128.154: 36.5107% ( 3576) 00:16:19.564 1128.154 - 1135.294: 39.9903% ( 3578) 00:16:19.564 1135.294 - 1142.434: 43.4377% ( 3545) 00:16:19.564 1142.434 - 1149.574: 46.8453% ( 3504) 00:16:19.564 1149.574 - 1156.715: 50.1254% ( 3373) 00:16:19.564 1156.715 - 1163.855: 53.2656% ( 3229) 00:16:19.564 1163.855 - 1170.995: 56.2598% ( 3079) 00:16:19.564 1170.995 - 1178.135: 59.0178% ( 2836) 00:16:19.564 1178.135 - 1185.276: 61.4130% ( 2463) 00:16:19.564 1185.276 - 1192.416: 63.5223% ( 2169) 00:16:19.564 1192.416 - 1199.556: 65.3710% ( 1901) 00:16:19.564 1199.556 - 1206.696: 66.9581% ( 1632) 00:16:19.564 1206.696 - 1213.836: 68.3021% ( 1382) 00:16:19.564 1213.836 - 1220.977: 69.4282% ( 1158) 00:16:19.564 1220.977 - 1228.117: 70.3676% ( 966) 00:16:19.564 1228.117 - 1235.257: 71.1718% ( 827) 00:16:19.564 1235.257 - 1242.397: 71.8710% ( 719) 00:16:19.564 1242.397 - 1249.537: 72.5148% ( 662) 00:16:19.564 1249.537 - 1256.678: 73.1178% ( 620) 00:16:19.564 1256.678 - 1263.818: 73.6789% ( 577) 00:16:19.564 1263.818 - 1270.958: 74.2633% ( 601) 00:16:19.564 1270.958 - 1278.098: 74.8527% ( 606) 00:16:19.564 1278.098 - 1285.239: 75.4576% ( 622) 00:16:19.564 1285.239 - 1292.379: 76.0313% ( 590) 00:16:19.564 1292.379 - 1299.519: 76.6138% ( 599) 00:16:19.564 1299.519 - 1306.659: 77.2109% ( 614) 00:16:19.564 1306.659 - 1313.799: 77.7837% ( 589) 00:16:19.564 1313.799 - 1320.940: 78.3575% ( 590) 00:16:19.564 1320.940 - 1328.080: 78.9458% ( 605) 00:16:19.564 1328.080 - 1335.220: 79.5507% ( 622) 00:16:19.564 1335.220 - 1342.360: 80.1371% ( 603) 00:16:19.564 1342.360 - 1349.500: 80.7420% ( 622) 00:16:19.564 1349.500 - 1356.641: 81.3430% ( 618) 00:16:19.564 1356.641 - 1363.781: 81.9527% ( 627) 00:16:19.564 1363.781 - 1370.921: 82.5304% ( 594) 00:16:19.564 1370.921 - 1378.061: 83.1110% ( 597) 00:16:19.564 1378.061 - 1385.202: 83.6740% ( 579) 00:16:19.564 1385.202 - 1392.342: 84.2478% ( 590) 00:16:19.564 1392.342 - 1399.482: 84.8147% ( 583) 00:16:19.564 1399.482 - 1406.622: 85.3535% ( 554) 00:16:19.564 1406.622 - 1413.762: 85.8621% ( 523) 00:16:19.564 1413.762 - 1420.903: 86.3629% ( 515) 00:16:19.564 1420.903 - 1428.043: 86.8317% ( 482) 00:16:19.564 1428.043 - 1435.183: 87.2868% ( 468) 00:16:19.564 1435.183 - 1442.323: 87.7254% ( 451) 00:16:19.564 1442.323 - 1449.463: 88.1562% ( 443) 00:16:19.564 1449.463 - 1456.604: 88.5666% ( 422) 00:16:19.564 1456.604 - 1463.744: 88.9575% ( 402) 00:16:19.564 1463.744 - 1470.884: 89.3290% ( 382) 00:16:19.564 1470.884 - 1478.024: 89.6781% ( 359) 00:16:19.564 1478.024 - 1485.165: 89.9971% ( 328) 00:16:19.564 1485.165 - 1492.305: 90.2986% ( 310) 00:16:19.564 1492.305 - 1499.445: 90.5845% ( 294) 00:16:19.564 1499.445 - 1506.585: 90.8490% ( 272) 00:16:19.564 1506.585 - 1513.725: 
91.0960% ( 254) 00:16:19.564 1513.725 - 1520.866: 91.3128% ( 223) 00:16:19.564 1520.866 - 1528.006: 91.5268% ( 220) 00:16:19.564 1528.006 - 1535.146: 91.7213% ( 200) 00:16:19.564 1535.146 - 1542.286: 91.8934% ( 177) 00:16:19.564 1542.286 - 1549.426: 92.0519% ( 163) 00:16:19.564 1549.426 - 1556.567: 92.2134% ( 166) 00:16:19.564 1556.567 - 1563.707: 92.3534% ( 144) 00:16:19.564 1563.707 - 1570.847: 92.4808% ( 131) 00:16:19.564 1570.847 - 1577.987: 92.6062% ( 129) 00:16:19.564 1577.987 - 1585.128: 92.7317% ( 129) 00:16:19.564 1585.128 - 1592.268: 92.8552% ( 127) 00:16:19.564 1592.268 - 1599.408: 92.9690% ( 117) 00:16:19.564 1599.408 - 1606.548: 93.0837% ( 118) 00:16:19.564 1606.548 - 1613.688: 93.1878% ( 107) 00:16:19.564 1613.688 - 1620.829: 93.2850% ( 100) 00:16:19.564 1620.829 - 1627.969: 93.3755% ( 93) 00:16:19.564 1627.969 - 1635.109: 93.4659% ( 93) 00:16:19.564 1635.109 - 1642.249: 93.5622% ( 99) 00:16:19.564 1642.249 - 1649.389: 93.6633% ( 104) 00:16:19.564 1649.389 - 1656.530: 93.7586% ( 98) 00:16:19.564 1656.530 - 1663.670: 93.8471% ( 91) 00:16:19.564 1663.670 - 1670.810: 93.9308% ( 86) 00:16:19.564 1670.810 - 1677.950: 94.0105% ( 82) 00:16:19.564 1677.950 - 1685.091: 94.0844% ( 76) 00:16:19.564 1685.091 - 1692.231: 94.1437% ( 61) 00:16:19.564 1692.231 - 1699.371: 94.2099% ( 68) 00:16:19.564 1699.371 - 1706.511: 94.2809% ( 73) 00:16:19.564 1706.511 - 1713.651: 94.3460% ( 67) 00:16:19.564 1713.651 - 1720.792: 94.4131% ( 69) 00:16:19.564 1720.792 - 1727.932: 94.4822% ( 71) 00:16:19.564 1727.932 - 1735.072: 94.5531% ( 73) 00:16:19.564 1735.072 - 1742.212: 94.6309% ( 80) 00:16:19.564 1742.212 - 1749.352: 94.7165% ( 88) 00:16:19.564 1749.352 - 1756.493: 94.7865% ( 72) 00:16:19.564 1756.493 - 1763.633: 94.8536% ( 69) 00:16:19.564 1763.633 - 1770.773: 94.9159% ( 64) 00:16:19.564 1770.773 - 1777.913: 94.9762% ( 62) 00:16:19.564 1777.913 - 1785.054: 95.0316% ( 57) 00:16:19.564 1785.054 - 1792.194: 95.0851% ( 55) 00:16:19.564 1792.194 - 1799.334: 95.1396% ( 56) 00:16:19.564 1799.334 - 1806.474: 95.1882% ( 50) 00:16:19.564 1806.474 - 1813.614: 95.2397% ( 53) 00:16:19.564 1813.614 - 1820.755: 95.2893% ( 51) 00:16:19.564 1820.755 - 1827.895: 95.3389% ( 51) 00:16:19.564 1827.895 - 1842.175: 95.4293% ( 93) 00:16:19.564 1842.175 - 1856.456: 95.5217% ( 95) 00:16:19.564 1856.456 - 1870.736: 95.6073% ( 88) 00:16:19.564 1870.736 - 1885.017: 95.6900% ( 85) 00:16:19.564 1885.017 - 1899.297: 95.7814% ( 94) 00:16:19.564 1899.297 - 1913.577: 95.8747% ( 96) 00:16:19.564 1913.577 - 1927.858: 95.9555% ( 83) 00:16:19.564 1927.858 - 1942.138: 96.0508% ( 98) 00:16:19.564 1942.138 - 1956.419: 96.1529% ( 105) 00:16:19.564 1956.419 - 1970.699: 96.2628% ( 113) 00:16:19.564 1970.699 - 1984.980: 96.3678% ( 108) 00:16:19.564 1984.980 - 1999.260: 96.4670% ( 102) 00:16:19.564 1999.260 - 2013.540: 96.5720% ( 108) 00:16:19.564 2013.540 - 2027.821: 96.6868% ( 118) 00:16:19.564 2027.821 - 2042.101: 96.8122% ( 129) 00:16:19.564 2042.101 - 2056.382: 96.9425% ( 134) 00:16:19.564 2056.382 - 2070.662: 97.0758% ( 137) 00:16:19.564 2070.662 - 2084.943: 97.1944% ( 122) 00:16:19.564 2084.943 - 2099.223: 97.3130% ( 122) 00:16:19.564 2099.223 - 2113.503: 97.4307% ( 121) 00:16:19.564 2113.503 - 2127.784: 97.5309% ( 103) 00:16:19.564 2127.784 - 2142.064: 97.6281% ( 100) 00:16:19.564 2142.064 - 2156.345: 97.7322% ( 107) 00:16:19.564 2156.345 - 2170.625: 97.8168% ( 87) 00:16:19.564 2170.625 - 2184.906: 97.9072% ( 93) 00:16:19.564 2184.906 - 2199.186: 97.9996% ( 95) 00:16:19.564 2199.186 - 2213.466: 98.0939% ( 97) 00:16:19.564 2213.466 - 
2227.747: 98.1708% ( 79) 00:16:19.564 2227.747 - 2242.027: 98.2447% ( 76) 00:16:19.564 2242.027 - 2256.308: 98.3234% ( 81) 00:16:19.564 2256.308 - 2270.588: 98.3993% ( 78) 00:16:19.564 2270.588 - 2284.869: 98.4800% ( 83) 00:16:19.564 2284.869 - 2299.149: 98.5588% ( 81) 00:16:19.564 2299.149 - 2313.429: 98.6346% ( 78) 00:16:19.564 2313.429 - 2327.710: 98.7085% ( 76) 00:16:19.564 2327.710 - 2341.990: 98.7708% ( 64) 00:16:19.564 2341.990 - 2356.271: 98.8369% ( 68) 00:16:19.564 2356.271 - 2370.551: 98.9079% ( 73) 00:16:19.564 2370.551 - 2384.832: 98.9643% ( 58) 00:16:19.564 2384.832 - 2399.112: 99.0227% ( 60) 00:16:19.564 2399.112 - 2413.392: 99.0732% ( 52) 00:16:19.564 2413.392 - 2427.673: 99.1219% ( 50) 00:16:19.564 2427.673 - 2441.953: 99.1608% ( 40) 00:16:19.564 2441.953 - 2456.234: 99.1996% ( 40) 00:16:19.564 2456.234 - 2470.514: 99.2376% ( 39) 00:16:19.564 2470.514 - 2484.795: 99.2668% ( 30) 00:16:19.564 2484.795 - 2499.075: 99.2930% ( 27) 00:16:19.564 2499.075 - 2513.355: 99.3105% ( 18) 00:16:19.564 2513.355 - 2527.636: 99.3280% ( 18) 00:16:19.564 2527.636 - 2541.916: 99.3407% ( 13) 00:16:19.564 2541.916 - 2556.197: 99.3494% ( 9) 00:16:19.564 2556.197 - 2570.477: 99.3543% ( 5) 00:16:19.564 2570.477 - 2584.758: 99.3591% ( 5) 00:16:19.564 2584.758 - 2599.038: 99.3630% ( 4) 00:16:19.564 2599.038 - 2613.318: 99.3669% ( 4) 00:16:19.564 2613.318 - 2627.599: 99.3747% ( 8) 00:16:19.564 2627.599 - 2641.879: 99.3864% ( 12) 00:16:19.564 2641.879 - 2656.160: 99.3951% ( 9) 00:16:19.564 2656.160 - 2670.440: 99.4029% ( 8) 00:16:19.564 2670.440 - 2684.721: 99.4136% ( 11) 00:16:19.564 2684.721 - 2699.001: 99.4272% ( 14) 00:16:19.564 2699.001 - 2713.281: 99.4418% ( 15) 00:16:19.564 2713.281 - 2727.562: 99.4583% ( 17) 00:16:19.564 2727.562 - 2741.842: 99.4797% ( 22) 00:16:19.564 2741.842 - 2756.123: 99.5021% ( 23) 00:16:19.564 2756.123 - 2770.403: 99.5274% ( 26) 00:16:19.564 2770.403 - 2784.684: 99.5546% ( 28) 00:16:19.564 2784.684 - 2798.964: 99.5770% ( 23) 00:16:19.564 2798.964 - 2813.244: 99.5935% ( 17) 00:16:19.564 2813.244 - 2827.525: 99.6159% ( 23) 00:16:19.564 2827.525 - 2841.805: 99.6363% ( 21) 00:16:19.564 2841.805 - 2856.086: 99.6577% ( 22) 00:16:19.564 2856.086 - 2870.366: 99.6801% ( 23) 00:16:19.564 2870.366 - 2884.647: 99.7014% ( 22) 00:16:19.565 2884.647 - 2898.927: 99.7219% ( 21) 00:16:19.565 2898.927 - 2913.207: 99.7345% ( 13) 00:16:19.565 2913.207 - 2927.488: 99.7462% ( 12) 00:16:19.565 2927.488 - 2941.768: 99.7647% ( 19) 00:16:19.565 2941.768 - 2956.049: 99.7812% ( 17) 00:16:19.565 2956.049 - 2970.329: 99.7987% ( 18) 00:16:19.565 2970.329 - 2984.610: 99.8143% ( 16) 00:16:19.565 2984.610 - 2998.890: 99.8269% ( 13) 00:16:19.565 2998.890 - 3013.170: 99.8366% ( 10) 00:16:19.565 3013.170 - 3027.451: 99.8425% ( 6) 00:16:19.565 3027.451 - 3041.731: 99.8493% ( 7) 00:16:19.565 3041.731 - 3056.012: 99.8561% ( 7) 00:16:19.565 3056.012 - 3070.292: 99.8619% ( 6) 00:16:19.565 3070.292 - 3084.573: 99.8697% ( 8) 00:16:19.565 3084.573 - 3098.853: 99.8746% ( 5) 00:16:19.565 3098.853 - 3113.133: 99.8794% ( 5) 00:16:19.565 3113.133 - 3127.414: 99.8862% ( 7) 00:16:19.565 3127.414 - 3141.694: 99.8921% ( 6) 00:16:19.565 3141.694 - 3155.975: 99.8940% ( 2) 00:16:19.565 3155.975 - 3170.255: 99.8950% ( 1) 00:16:19.565 3170.255 - 3184.536: 99.8959% ( 1) 00:16:19.565 3184.536 - 3198.816: 99.8979% ( 2) 00:16:19.565 3198.816 - 3213.096: 99.8998% ( 2) 00:16:19.565 3213.096 - 3227.377: 99.9008% ( 1) 00:16:19.565 3227.377 - 3241.657: 99.9028% ( 2) 00:16:19.565 3241.657 - 3255.938: 99.9037% ( 1) 00:16:19.565 3255.938 - 
3270.218: 99.9057% ( 2) 00:16:19.565 3270.218 - 3284.499: 99.9076% ( 2) 00:16:19.565 3284.499 - 3298.779: 99.9096% ( 2) 00:16:19.565 3298.779 - 3313.059: 99.9105% ( 1) 00:16:19.565 3313.059 - 3327.340: 99.9125% ( 2) 00:16:19.565 3327.340 - 3341.620: 99.9144% ( 2) 00:16:19.565 3341.620 - 3355.901: 99.9154% ( 1) 00:16:19.565 3355.901 - 3370.181: 99.9173% ( 2) 00:16:19.565 3370.181 - 3384.462: 99.9183% ( 1) 00:16:19.565 3384.462 - 3398.742: 99.9203% ( 2) 00:16:19.565 3398.742 - 3413.022: 99.9212% ( 1) 00:16:19.565 3413.022 - 3427.303: 99.9232% ( 2) 00:16:19.565 3441.583 - 3455.864: 99.9251% ( 2) 00:16:19.565 3455.864 - 3470.144: 99.9271% ( 2) 00:16:19.565 3470.144 - 3484.425: 99.9280% ( 1) 00:16:19.565 3484.425 - 3498.705: 99.9300% ( 2) 00:16:19.565 3498.705 - 3512.985: 99.9319% ( 2) 00:16:19.565 3512.985 - 3527.266: 99.9329% ( 1) 00:16:19.565 3527.266 - 3541.546: 99.9348% ( 2) 00:16:19.565 3541.546 - 3555.827: 99.9368% ( 2) 00:16:19.565 3555.827 - 3570.107: 99.9378% ( 1) 00:16:19.565 3570.107 - 3584.388: 99.9397% ( 2) 00:16:19.565 3584.388 - 3598.668: 99.9417% ( 2) 00:16:19.565 3598.668 - 3612.948: 99.9426% ( 1) 00:16:19.565 3612.948 - 3627.229: 99.9446% ( 2) 00:16:19.565 3627.229 - 3641.509: 99.9465% ( 2) 00:16:19.565 3641.509 - 3655.790: 99.9475% ( 1) 00:16:19.565 3655.790 - 3684.351: 99.9504% ( 3) 00:16:19.565 3684.351 - 3712.911: 99.9514% ( 1) 00:16:19.565 3712.911 - 3741.472: 99.9543% ( 3) 00:16:19.565 3741.472 - 3770.033: 99.9582% ( 4) 00:16:19.565 3770.033 - 3798.594: 99.9611% ( 3) 00:16:19.565 3798.594 - 3827.155: 99.9640% ( 3) 00:16:19.565 3827.155 - 3855.716: 99.9669% ( 3) 00:16:19.565 3855.716 - 3884.277: 99.9708% ( 4) 00:16:19.565 3884.277 - 3912.837: 99.9728% ( 2) 00:16:19.565 3912.837 - 3941.398: 99.9767% ( 4) 00:16:19.565 3941.398 - 3969.959: 99.9796% ( 3) 00:16:19.565 3969.959 - 3998.520: 99.9825% ( 3) 00:16:19.565 3998.520 - 4027.081: 99.9864% ( 4) 00:16:19.565 4027.081 - 4055.642: 99.9893% ( 3) 00:16:19.565 4055.642 - 4084.203: 99.9922% ( 3) 00:16:19.565 4084.203 - 4112.763: 99.9951% ( 3) 00:16:19.565 4112.763 - 4141.324: 99.9971% ( 2) 00:16:19.565 4141.324 - 4169.885: 99.9990% ( 2) 00:16:19.565 4169.885 - 4198.446: 100.0000% ( 1) 00:16:19.565 00:16:19.565 02:41:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:16:19.823 EAL: TSC is not safe to use in SMP mode 00:16:19.823 EAL: TSC is not invariant 00:16:19.823 [2024-07-25 02:41:06.581767] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:20.757 Initializing NVMe Controllers 00:16:20.757 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:20.757 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:20.757 Initialization complete. Launching workers. 
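
The spdk_nvme_perf invocation above drives a single-namespace write workload. A hedged reading of its flags, plus a sketch of an equivalent stand-alone run against the controller the log shows being attached (the explicit -r transport string is an assumption; this run relies on default PCIe probing instead):

    # Sketch only: assumes a built SPDK tree and the same emulated controller
    # at 0000:00:10.0 that appears in the log above.
    #   -q 128    queue depth
    #   -w write  100% sequential writes
    #   -o 12288  12 KiB I/O size
    #   -t 1      run for one second
    #   -LL       software latency tracking with the full per-bucket histogram
    #             (a single -L prints only the summary percentiles)
    #   -i 0      shared-memory group id, so it can coexist with other SPDK apps
    sudo ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 \
        -r 'trtype:PCIe traddr:0000:00:10.0'

The summary table, percentile list, and bucket histogram that follow are this tool's normal -LL output for one namespace.
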
00:16:20.757 ======================================================== 00:16:20.757 Latency(us) 00:16:20.757 Device Information : IOPS MiB/s Average min max 00:16:20.757 PCIE (0000:00:10.0) NSID 1 from core 0: 105508.42 1236.43 1213.04 457.01 11767.41 00:16:20.757 ======================================================== 00:16:20.757 Total : 105508.42 1236.43 1213.04 457.01 11767.41 00:16:20.757 00:16:20.757 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:16:20.757 ================================================================================= 00:16:20.757 1.00000% : 792.564us 00:16:20.757 10.00000% : 999.630us 00:16:20.757 25.00000% : 1078.172us 00:16:20.757 50.00000% : 1142.434us 00:16:20.757 75.00000% : 1342.360us 00:16:20.757 90.00000% : 1456.604us 00:16:20.757 95.00000% : 1506.585us 00:16:20.757 98.00000% : 1770.773us 00:16:20.757 99.00000% : 2456.234us 00:16:20.757 99.50000% : 2941.768us 00:16:20.757 99.90000% : 8910.987us 00:16:20.757 99.99000% : 10453.274us 00:16:20.757 99.99900% : 11767.073us 00:16:20.757 99.99990% : 11824.195us 00:16:20.757 99.99999% : 11824.195us 00:16:20.757 00:16:20.757 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:16:20.757 ============================================================================== 00:16:20.757 Range in us Cumulative IO count 00:16:20.757 456.974 - 460.544: 0.0009% ( 1) 00:16:20.757 471.254 - 474.824: 0.0019% ( 1) 00:16:20.757 524.806 - 528.376: 0.0028% ( 1) 00:16:20.757 581.927 - 585.498: 0.0038% ( 1) 00:16:20.757 589.068 - 592.638: 0.0047% ( 1) 00:16:20.757 599.778 - 603.348: 0.0057% ( 1) 00:16:20.757 614.058 - 617.629: 0.0076% ( 2) 00:16:20.757 617.629 - 621.199: 0.0095% ( 2) 00:16:20.757 621.199 - 624.769: 0.0104% ( 1) 00:16:20.757 628.339 - 631.909: 0.0114% ( 1) 00:16:20.757 639.049 - 642.619: 0.0123% ( 1) 00:16:20.757 656.900 - 660.470: 0.0133% ( 1) 00:16:20.757 664.040 - 667.610: 0.0180% ( 5) 00:16:20.757 667.610 - 671.180: 0.0246% ( 7) 00:16:20.757 671.180 - 674.750: 0.0256% ( 1) 00:16:20.757 674.750 - 678.320: 0.0284% ( 3) 00:16:20.757 678.320 - 681.890: 0.0332% ( 5) 00:16:20.757 681.890 - 685.461: 0.0455% ( 13) 00:16:20.757 685.461 - 689.031: 0.0512% ( 6) 00:16:20.757 689.031 - 692.601: 0.0521% ( 1) 00:16:20.757 692.601 - 696.171: 0.0578% ( 6) 00:16:20.757 699.741 - 703.311: 0.0606% ( 3) 00:16:20.757 703.311 - 706.881: 0.0654% ( 5) 00:16:20.757 706.881 - 710.451: 0.0729% ( 8) 00:16:20.757 710.451 - 714.021: 0.0853% ( 13) 00:16:20.757 714.021 - 717.592: 0.1004% ( 16) 00:16:20.757 717.592 - 721.162: 0.1089% ( 9) 00:16:20.757 721.162 - 724.732: 0.1222% ( 14) 00:16:20.757 724.732 - 728.302: 0.1364% ( 15) 00:16:20.757 728.302 - 731.872: 0.1866% ( 53) 00:16:20.757 731.872 - 735.442: 0.2046% ( 19) 00:16:20.757 735.442 - 739.012: 0.2463% ( 44) 00:16:20.757 739.012 - 742.582: 0.3022% ( 59) 00:16:20.757 742.582 - 746.152: 0.3477% ( 48) 00:16:20.757 746.152 - 749.722: 0.3988% ( 54) 00:16:20.757 749.722 - 753.293: 0.4254% ( 28) 00:16:20.757 753.293 - 756.863: 0.4690% ( 46) 00:16:20.757 756.863 - 760.433: 0.5201% ( 54) 00:16:20.757 760.433 - 764.003: 0.5457% ( 27) 00:16:20.757 764.003 - 767.573: 0.5902% ( 47) 00:16:20.757 767.573 - 771.143: 0.6253% ( 37) 00:16:20.757 771.143 - 774.713: 0.6689% ( 46) 00:16:20.757 774.713 - 778.283: 0.7067% ( 40) 00:16:20.757 778.283 - 781.853: 0.7787% ( 76) 00:16:20.757 781.853 - 785.424: 0.8555% ( 81) 00:16:20.757 785.424 - 788.994: 0.9682% ( 119) 00:16:20.757 788.994 - 792.564: 1.1084% ( 148) 00:16:20.757 792.564 - 796.134: 1.2505% ( 150) 00:16:20.757 796.134 - 799.704: 
1.3623% ( 118) 00:16:20.757 799.704 - 803.274: 1.4722% ( 116) 00:16:20.757 803.274 - 806.844: 1.5963% ( 131) 00:16:20.757 806.844 - 810.414: 1.6996% ( 109) 00:16:20.757 810.414 - 813.984: 1.8256% ( 133) 00:16:20.757 813.984 - 817.555: 1.9374% ( 118) 00:16:20.757 817.555 - 821.125: 2.0482% ( 117) 00:16:20.757 821.125 - 824.695: 2.1610% ( 119) 00:16:20.757 824.695 - 828.265: 2.2879% ( 134) 00:16:20.757 828.265 - 831.835: 2.4054% ( 124) 00:16:20.757 831.835 - 835.405: 2.5532% ( 156) 00:16:20.757 835.405 - 838.975: 2.7019% ( 157) 00:16:20.757 838.975 - 842.545: 2.8204% ( 125) 00:16:20.757 842.545 - 846.115: 2.9871% ( 176) 00:16:20.757 846.115 - 849.685: 3.1813% ( 205) 00:16:20.757 849.685 - 853.256: 3.3547% ( 183) 00:16:20.757 853.256 - 856.826: 3.5262% ( 181) 00:16:20.757 856.826 - 860.396: 3.6967% ( 180) 00:16:20.757 860.396 - 863.966: 3.8701% ( 183) 00:16:20.757 863.966 - 867.536: 4.0055% ( 143) 00:16:20.757 867.536 - 871.106: 4.1476% ( 150) 00:16:20.757 871.106 - 874.676: 4.2604% ( 119) 00:16:20.757 874.676 - 878.246: 4.3864% ( 133) 00:16:20.757 878.246 - 881.816: 4.5067% ( 127) 00:16:20.757 881.816 - 885.387: 4.6062% ( 105) 00:16:20.757 885.387 - 888.957: 4.7246% ( 125) 00:16:20.757 888.957 - 892.527: 4.8819% ( 166) 00:16:20.757 892.527 - 896.097: 5.0571% ( 185) 00:16:20.757 896.097 - 899.667: 5.2343% ( 187) 00:16:20.757 899.667 - 903.237: 5.4238% ( 200) 00:16:20.757 903.237 - 906.807: 5.6085% ( 195) 00:16:20.757 906.807 - 910.377: 5.7904% ( 192) 00:16:20.757 910.377 - 913.947: 5.9211% ( 138) 00:16:20.757 913.947 - 921.088: 6.1684% ( 261) 00:16:20.757 921.088 - 928.228: 6.5559% ( 409) 00:16:20.757 928.228 - 935.368: 6.8249% ( 284) 00:16:20.757 935.368 - 942.508: 7.1025% ( 293) 00:16:20.757 942.508 - 949.648: 7.3157% ( 225) 00:16:20.757 949.648 - 956.789: 7.6160% ( 317) 00:16:20.757 956.789 - 963.929: 7.9106% ( 311) 00:16:20.757 963.929 - 971.069: 8.3673% ( 482) 00:16:20.757 971.069 - 978.209: 8.8059% ( 463) 00:16:20.757 978.209 - 985.350: 9.3374% ( 561) 00:16:20.757 985.350 - 992.490: 9.8376% ( 528) 00:16:20.757 992.490 - 999.630: 10.6296% ( 836) 00:16:20.757 999.630 - 1006.770: 11.5287% ( 949) 00:16:20.757 1006.770 - 1013.910: 12.4846% ( 1009) 00:16:20.757 1013.910 - 1021.051: 13.6385% ( 1218) 00:16:20.757 1021.051 - 1028.191: 14.8720% ( 1302) 00:16:20.757 1028.191 - 1035.331: 16.1538% ( 1353) 00:16:20.757 1035.331 - 1042.471: 17.7454% ( 1680) 00:16:20.757 1042.471 - 1049.611: 19.2195% ( 1556) 00:16:20.758 1049.611 - 1056.752: 20.8746% ( 1747) 00:16:20.758 1056.752 - 1063.892: 22.6860% ( 1912) 00:16:20.758 1063.892 - 1071.032: 24.6642% ( 2088) 00:16:20.758 1071.032 - 1078.172: 26.7427% ( 2194) 00:16:20.758 1078.172 - 1085.313: 29.0979% ( 2486) 00:16:20.758 1085.313 - 1092.453: 32.0916% ( 3160) 00:16:20.758 1092.453 - 1099.593: 35.2208% ( 3303) 00:16:20.758 1099.593 - 1106.733: 38.0526% ( 2989) 00:16:20.758 1106.733 - 1113.873: 41.0501% ( 3164) 00:16:20.758 1113.873 - 1121.014: 43.9140% ( 3023) 00:16:20.758 1121.014 - 1128.154: 46.4502% ( 2677) 00:16:20.758 1128.154 - 1135.294: 48.8916% ( 2577) 00:16:20.758 1135.294 - 1142.434: 51.2060% ( 2443) 00:16:20.758 1142.434 - 1149.574: 53.4845% ( 2405) 00:16:20.758 1149.574 - 1156.715: 55.5327% ( 2162) 00:16:20.758 1156.715 - 1163.855: 57.6852% ( 2272) 00:16:20.758 1163.855 - 1170.995: 59.2502% ( 1652) 00:16:20.758 1170.995 - 1178.135: 60.7945% ( 1630) 00:16:20.758 1178.135 - 1185.276: 62.2354% ( 1521) 00:16:20.758 1185.276 - 1192.416: 63.5921% ( 1432) 00:16:20.758 1192.416 - 1199.556: 64.7166% ( 1187) 00:16:20.758 1199.556 - 1206.696: 65.7796% ( 
1122) 00:16:20.758 1206.696 - 1213.836: 66.8189% ( 1097) 00:16:20.758 1213.836 - 1220.977: 67.6933% ( 923) 00:16:20.758 1220.977 - 1228.117: 68.4465% ( 795) 00:16:20.758 1228.117 - 1235.257: 69.1457% ( 738) 00:16:20.758 1235.257 - 1242.397: 69.7823% ( 672) 00:16:20.758 1242.397 - 1249.537: 70.3460% ( 595) 00:16:20.758 1249.537 - 1256.678: 70.8557% ( 538) 00:16:20.758 1256.678 - 1263.818: 71.3663% ( 539) 00:16:20.758 1263.818 - 1270.958: 71.7453% ( 400) 00:16:20.758 1270.958 - 1278.098: 72.1299% ( 406) 00:16:20.758 1278.098 - 1285.239: 72.4975% ( 388) 00:16:20.758 1285.239 - 1292.379: 72.7523% ( 269) 00:16:20.758 1292.379 - 1299.519: 73.0375% ( 301) 00:16:20.758 1299.519 - 1306.659: 73.3492% ( 329) 00:16:20.758 1306.659 - 1313.799: 73.6703% ( 339) 00:16:20.758 1313.799 - 1320.940: 73.9707% ( 317) 00:16:20.758 1320.940 - 1328.080: 74.2918% ( 339) 00:16:20.758 1328.080 - 1335.220: 74.6424% ( 370) 00:16:20.758 1335.220 - 1342.360: 75.0156% ( 394) 00:16:20.758 1342.360 - 1349.500: 75.4609% ( 470) 00:16:20.758 1349.500 - 1356.641: 75.9820% ( 550) 00:16:20.758 1356.641 - 1363.781: 76.5239% ( 572) 00:16:20.758 1363.781 - 1370.921: 77.1586% ( 670) 00:16:20.758 1370.921 - 1378.061: 77.9734% ( 860) 00:16:20.758 1378.061 - 1385.202: 78.9170% ( 996) 00:16:20.758 1385.202 - 1392.342: 79.9060% ( 1044) 00:16:20.758 1392.342 - 1399.482: 81.0097% ( 1165) 00:16:20.758 1399.482 - 1406.622: 82.2015% ( 1258) 00:16:20.758 1406.622 - 1413.762: 83.3839% ( 1248) 00:16:20.758 1413.762 - 1420.903: 84.7794% ( 1473) 00:16:20.758 1420.903 - 1428.043: 86.1351% ( 1431) 00:16:20.758 1428.043 - 1435.183: 87.2530% ( 1180) 00:16:20.758 1435.183 - 1442.323: 88.3642% ( 1173) 00:16:20.758 1442.323 - 1449.463: 89.3978% ( 1091) 00:16:20.758 1449.463 - 1456.604: 90.3689% ( 1025) 00:16:20.758 1456.604 - 1463.744: 91.3428% ( 1028) 00:16:20.758 1463.744 - 1470.884: 92.1917% ( 896) 00:16:20.758 1470.884 - 1478.024: 92.9373% ( 787) 00:16:20.758 1478.024 - 1485.165: 93.6317% ( 733) 00:16:20.758 1485.165 - 1492.305: 94.2437% ( 646) 00:16:20.758 1492.305 - 1499.445: 94.7477% ( 532) 00:16:20.758 1499.445 - 1506.585: 95.1418% ( 416) 00:16:20.758 1506.585 - 1513.725: 95.5748% ( 457) 00:16:20.758 1513.725 - 1520.866: 95.8959% ( 339) 00:16:20.758 1520.866 - 1528.006: 96.1802% ( 300) 00:16:20.758 1528.006 - 1535.146: 96.4132% ( 246) 00:16:20.758 1535.146 - 1542.286: 96.6074% ( 205) 00:16:20.758 1542.286 - 1549.426: 96.7277% ( 127) 00:16:20.758 1549.426 - 1556.567: 96.8575% ( 137) 00:16:20.758 1556.567 - 1563.707: 96.9229% ( 69) 00:16:20.758 1563.707 - 1570.847: 97.0205% ( 103) 00:16:20.758 1570.847 - 1577.987: 97.1219% ( 107) 00:16:20.758 1577.987 - 1585.128: 97.2005% ( 83) 00:16:20.758 1585.128 - 1592.268: 97.2564% ( 59) 00:16:20.758 1592.268 - 1599.408: 97.3000% ( 46) 00:16:20.758 1599.408 - 1606.548: 97.3483% ( 51) 00:16:20.758 1606.548 - 1613.688: 97.3862% ( 40) 00:16:20.758 1613.688 - 1620.829: 97.4241% ( 40) 00:16:20.758 1620.829 - 1627.969: 97.4582% ( 36) 00:16:20.758 1627.969 - 1635.109: 97.4856% ( 29) 00:16:20.758 1635.109 - 1642.249: 97.5330% ( 50) 00:16:20.758 1642.249 - 1649.389: 97.5595% ( 28) 00:16:20.758 1649.389 - 1656.530: 97.6069% ( 50) 00:16:20.758 1656.530 - 1663.670: 97.6486% ( 44) 00:16:20.758 1663.670 - 1670.810: 97.6742% ( 27) 00:16:20.758 1670.810 - 1677.950: 97.7035% ( 31) 00:16:20.758 1677.950 - 1685.091: 97.7282% ( 26) 00:16:20.758 1685.091 - 1692.231: 97.7433% ( 16) 00:16:20.758 1692.231 - 1699.371: 97.7642% ( 22) 00:16:20.758 1699.371 - 1706.511: 97.7841% ( 21) 00:16:20.758 1706.511 - 1713.651: 97.7945% ( 11) 
00:16:20.758 1713.651 - 1720.792: 97.8087% ( 15) 00:16:20.758 1720.792 - 1727.932: 97.8333% ( 26) 00:16:20.758 1727.932 - 1735.072: 97.8580% ( 26) 00:16:20.758 1735.072 - 1742.212: 97.8835% ( 27) 00:16:20.758 1742.212 - 1749.352: 97.9195% ( 38) 00:16:20.758 1749.352 - 1756.493: 97.9432% ( 25) 00:16:20.758 1756.493 - 1763.633: 97.9783% ( 37) 00:16:20.758 1763.633 - 1770.773: 98.0058% ( 29) 00:16:20.758 1770.773 - 1777.913: 98.0399% ( 36) 00:16:20.758 1777.913 - 1785.054: 98.0673% ( 29) 00:16:20.758 1785.054 - 1792.194: 98.1081% ( 43) 00:16:20.758 1792.194 - 1799.334: 98.1280% ( 21) 00:16:20.758 1799.334 - 1806.474: 98.1422% ( 15) 00:16:20.758 1806.474 - 1813.614: 98.1621% ( 21) 00:16:20.758 1813.614 - 1820.755: 98.1820% ( 21) 00:16:20.758 1820.755 - 1827.895: 98.1924% ( 11) 00:16:20.758 1827.895 - 1842.175: 98.2256% ( 35) 00:16:20.758 1842.175 - 1856.456: 98.2473% ( 23) 00:16:20.758 1856.456 - 1870.736: 98.2682% ( 22) 00:16:20.758 1870.736 - 1885.017: 98.2890% ( 22) 00:16:20.758 1885.017 - 1899.297: 98.3146% ( 27) 00:16:20.758 1899.297 - 1913.577: 98.3402% ( 27) 00:16:20.758 1913.577 - 1927.858: 98.3714% ( 33) 00:16:20.758 1927.858 - 1942.138: 98.4027% ( 33) 00:16:20.758 1942.138 - 1956.419: 98.4198% ( 18) 00:16:20.758 1956.419 - 1970.699: 98.4397% ( 21) 00:16:20.758 1970.699 - 1984.980: 98.4463% ( 7) 00:16:20.758 1984.980 - 1999.260: 98.4624% ( 17) 00:16:20.758 1999.260 - 2013.540: 98.4880% ( 27) 00:16:20.758 2013.540 - 2027.821: 98.5107% ( 24) 00:16:20.758 2027.821 - 2042.101: 98.5230% ( 13) 00:16:20.758 2042.101 - 2056.382: 98.5325% ( 10) 00:16:20.758 2056.382 - 2070.662: 98.5382% ( 6) 00:16:20.758 2070.662 - 2084.943: 98.5439% ( 6) 00:16:20.758 2084.943 - 2099.223: 98.5723% ( 30) 00:16:20.758 2099.223 - 2113.503: 98.5780% ( 6) 00:16:20.758 2113.503 - 2127.784: 98.5969% ( 20) 00:16:20.758 2127.784 - 2142.064: 98.6301% ( 35) 00:16:20.758 2142.064 - 2156.345: 98.6367% ( 7) 00:16:20.758 2156.345 - 2170.625: 98.6500% ( 14) 00:16:20.758 2170.625 - 2184.906: 98.6689% ( 20) 00:16:20.758 2184.906 - 2199.186: 98.6756% ( 7) 00:16:20.758 2199.186 - 2213.466: 98.6793% ( 4) 00:16:20.758 2213.466 - 2227.747: 98.6926% ( 14) 00:16:20.758 2227.747 - 2242.027: 98.7153% ( 24) 00:16:20.758 2242.027 - 2256.308: 98.7532% ( 40) 00:16:20.758 2256.308 - 2270.588: 98.7968% ( 46) 00:16:20.758 2270.588 - 2284.869: 98.8129% ( 17) 00:16:20.758 2284.869 - 2299.149: 98.8271% ( 15) 00:16:20.758 2299.149 - 2313.429: 98.8451% ( 19) 00:16:20.758 2313.429 - 2327.710: 98.8546% ( 10) 00:16:20.758 2327.710 - 2341.990: 98.8717% ( 18) 00:16:20.758 2341.990 - 2356.271: 98.8878% ( 17) 00:16:20.758 2356.271 - 2370.551: 98.9228% ( 37) 00:16:20.758 2370.551 - 2384.832: 98.9304% ( 8) 00:16:20.758 2384.832 - 2399.112: 98.9361% ( 6) 00:16:20.758 2399.112 - 2413.392: 98.9494% ( 14) 00:16:20.758 2413.392 - 2427.673: 98.9655% ( 17) 00:16:20.758 2427.673 - 2441.953: 98.9948% ( 31) 00:16:20.758 2441.953 - 2456.234: 99.0195% ( 26) 00:16:20.758 2456.234 - 2470.514: 99.0251% ( 6) 00:16:20.758 2470.514 - 2484.795: 99.0441% ( 20) 00:16:20.758 2484.795 - 2499.075: 99.0602% ( 17) 00:16:20.758 2499.075 - 2513.355: 99.0820% ( 23) 00:16:20.758 2513.355 - 2527.636: 99.1294% ( 50) 00:16:20.758 2527.636 - 2541.916: 99.1369% ( 8) 00:16:20.758 2541.916 - 2556.197: 99.1587% ( 23) 00:16:20.758 2556.197 - 2570.477: 99.1767% ( 19) 00:16:20.758 2570.477 - 2584.758: 99.1909% ( 15) 00:16:20.758 2584.758 - 2599.038: 99.1947% ( 4) 00:16:20.758 2599.038 - 2613.318: 99.2298% ( 37) 00:16:20.758 2613.318 - 2627.599: 99.2648% ( 37) 00:16:20.758 2627.599 - 2641.879: 
99.2658% ( 1) 00:16:20.758 2641.879 - 2656.160: 99.2686% ( 3) 00:16:20.758 2656.160 - 2670.440: 99.3018% ( 35) 00:16:20.758 2670.440 - 2684.721: 99.3084% ( 7) 00:16:20.758 2684.721 - 2699.001: 99.3131% ( 5) 00:16:20.758 2699.001 - 2713.281: 99.3207% ( 8) 00:16:20.758 2713.281 - 2727.562: 99.3264% ( 6) 00:16:20.758 2727.562 - 2741.842: 99.3558% ( 31) 00:16:20.758 2741.842 - 2756.123: 99.3596% ( 4) 00:16:20.758 2756.123 - 2770.403: 99.3634% ( 4) 00:16:20.758 2784.684 - 2798.964: 99.3671% ( 4) 00:16:20.758 2798.964 - 2813.244: 99.3795% ( 13) 00:16:20.758 2813.244 - 2827.525: 99.4041% ( 26) 00:16:20.758 2827.525 - 2841.805: 99.4221% ( 19) 00:16:20.758 2841.805 - 2856.086: 99.4249% ( 3) 00:16:20.758 2856.086 - 2870.366: 99.4325% ( 8) 00:16:20.758 2870.366 - 2884.647: 99.4410% ( 9) 00:16:20.758 2884.647 - 2898.927: 99.4534% ( 13) 00:16:20.758 2898.927 - 2913.207: 99.4846% ( 33) 00:16:20.758 2913.207 - 2927.488: 99.4950% ( 11) 00:16:20.759 2927.488 - 2941.768: 99.5064% ( 12) 00:16:20.759 2941.768 - 2956.049: 99.5121% ( 6) 00:16:20.759 2956.049 - 2970.329: 99.5225% ( 11) 00:16:20.759 2970.329 - 2984.610: 99.5244% ( 2) 00:16:20.759 3013.170 - 3027.451: 99.5320% ( 8) 00:16:20.759 3027.451 - 3041.731: 99.5358% ( 4) 00:16:20.759 3084.573 - 3098.853: 99.5386% ( 3) 00:16:20.759 3098.853 - 3113.133: 99.5623% ( 25) 00:16:20.759 3113.133 - 3127.414: 99.5642% ( 2) 00:16:20.759 3127.414 - 3141.694: 99.5860% ( 23) 00:16:20.759 3141.694 - 3155.975: 99.6021% ( 17) 00:16:20.759 3155.975 - 3170.255: 99.6049% ( 3) 00:16:20.759 3184.536 - 3198.816: 99.6116% ( 7) 00:16:20.759 3241.657 - 3255.938: 99.6163% ( 5) 00:16:20.759 3327.340 - 3341.620: 99.6277% ( 12) 00:16:20.759 3355.901 - 3370.181: 99.6362% ( 9) 00:16:20.759 3370.181 - 3384.462: 99.6409% ( 5) 00:16:20.759 3384.462 - 3398.742: 99.6457% ( 5) 00:16:20.759 3398.742 - 3413.022: 99.6485% ( 3) 00:16:20.759 3441.583 - 3455.864: 99.6514% ( 3) 00:16:20.759 3455.864 - 3470.144: 99.6552% ( 4) 00:16:20.759 3484.425 - 3498.705: 99.6608% ( 6) 00:16:20.759 3512.985 - 3527.266: 99.6637% ( 3) 00:16:20.759 3570.107 - 3584.388: 99.6665% ( 3) 00:16:20.759 3584.388 - 3598.668: 99.6750% ( 9) 00:16:20.759 3612.948 - 3627.229: 99.6855% ( 11) 00:16:20.759 3627.229 - 3641.509: 99.7016% ( 17) 00:16:20.759 3641.509 - 3655.790: 99.7073% ( 6) 00:16:20.759 3655.790 - 3684.351: 99.7129% ( 6) 00:16:20.759 3712.911 - 3741.472: 99.7148% ( 2) 00:16:20.759 3741.472 - 3770.033: 99.7167% ( 2) 00:16:20.759 3770.033 - 3798.594: 99.7177% ( 1) 00:16:20.759 3855.716 - 3884.277: 99.7215% ( 4) 00:16:20.759 3941.398 - 3969.959: 99.7234% ( 2) 00:16:20.759 3969.959 - 3998.520: 99.7253% ( 2) 00:16:20.759 3998.520 - 4027.081: 99.7309% ( 6) 00:16:20.759 4027.081 - 4055.642: 99.7376% ( 7) 00:16:20.759 4055.642 - 4084.203: 99.7423% ( 5) 00:16:20.759 4141.324 - 4169.885: 99.7461% ( 4) 00:16:20.759 4169.885 - 4198.446: 99.7565% ( 11) 00:16:20.759 4198.446 - 4227.007: 99.7594% ( 3) 00:16:20.759 4227.007 - 4255.568: 99.7622% ( 3) 00:16:20.759 4255.568 - 4284.129: 99.7660% ( 4) 00:16:20.759 4284.129 - 4312.689: 99.7698% ( 4) 00:16:20.759 4312.689 - 4341.250: 99.7774% ( 8) 00:16:20.759 4341.250 - 4369.811: 99.7821% ( 5) 00:16:20.759 4369.811 - 4398.372: 99.7859% ( 4) 00:16:20.759 4398.372 - 4426.933: 99.7897% ( 4) 00:16:20.759 4426.933 - 4455.494: 99.7925% ( 3) 00:16:20.759 4455.494 - 4484.054: 99.7935% ( 1) 00:16:20.759 4598.298 - 4626.859: 99.7944% ( 1) 00:16:20.759 4626.859 - 4655.420: 99.7973% ( 3) 00:16:20.759 4655.420 - 4683.980: 99.8010% ( 4) 00:16:20.759 4683.980 - 4712.541: 99.8048% ( 4) 00:16:20.759 
4712.541 - 4741.102: 99.8086% ( 4) 00:16:20.759 4741.102 - 4769.663: 99.8124% ( 4) 00:16:20.759 4769.663 - 4798.224: 99.8153% ( 3) 00:16:20.759 4798.224 - 4826.785: 99.8190% ( 4) 00:16:20.759 4826.785 - 4855.346: 99.8228% ( 4) 00:16:20.759 4855.346 - 4883.906: 99.8266% ( 4) 00:16:20.759 4883.906 - 4912.467: 99.8304% ( 4) 00:16:20.759 4912.467 - 4941.028: 99.8342% ( 4) 00:16:20.759 4941.028 - 4969.589: 99.8380% ( 4) 00:16:20.759 4969.589 - 4998.150: 99.8408% ( 3) 00:16:20.759 4998.150 - 5026.711: 99.8446% ( 4) 00:16:20.759 5026.711 - 5055.272: 99.8484% ( 4) 00:16:20.759 5055.272 - 5083.832: 99.8522% ( 4) 00:16:20.759 5083.832 - 5112.393: 99.8560% ( 4) 00:16:20.759 5112.393 - 5140.954: 99.8598% ( 4) 00:16:20.759 5140.954 - 5169.515: 99.8626% ( 3) 00:16:20.759 5169.515 - 5198.076: 99.8655% ( 3) 00:16:20.759 5198.076 - 5226.637: 99.8674% ( 2) 00:16:20.759 5283.758 - 5312.319: 99.8683% ( 1) 00:16:20.759 5797.854 - 5826.415: 99.8693% ( 1) 00:16:20.759 5826.415 - 5854.976: 99.8712% ( 2) 00:16:20.759 5912.097 - 5940.658: 99.8721% ( 1) 00:16:20.759 6369.071 - 6397.632: 99.8731% ( 1) 00:16:20.759 6397.632 - 6426.193: 99.8740% ( 1) 00:16:20.759 6626.119 - 6654.680: 99.8749% ( 1) 00:16:20.759 6940.288 - 6968.849: 99.8759% ( 1) 00:16:20.759 7025.971 - 7054.532: 99.8768% ( 1) 00:16:20.759 8225.527 - 8282.648: 99.8778% ( 1) 00:16:20.759 8396.892 - 8454.014: 99.8787% ( 1) 00:16:20.759 8511.135 - 8568.257: 99.8797% ( 1) 00:16:20.759 8568.257 - 8625.379: 99.8825% ( 3) 00:16:20.759 8625.379 - 8682.500: 99.8854% ( 3) 00:16:20.759 8682.500 - 8739.622: 99.8882% ( 3) 00:16:20.759 8739.622 - 8796.744: 99.8929% ( 5) 00:16:20.759 8796.744 - 8853.866: 99.8986% ( 6) 00:16:20.759 8853.866 - 8910.987: 99.9015% ( 3) 00:16:20.759 8910.987 - 8968.109: 99.9034% ( 2) 00:16:20.759 9025.231 - 9082.352: 99.9128% ( 10) 00:16:20.759 9082.352 - 9139.474: 99.9289% ( 17) 00:16:20.759 9139.474 - 9196.596: 99.9318% ( 3) 00:16:20.759 9196.596 - 9253.718: 99.9394% ( 8) 00:16:20.759 9253.718 - 9310.839: 99.9479% ( 9) 00:16:20.759 9310.839 - 9367.961: 99.9583% ( 11) 00:16:20.759 9367.961 - 9425.083: 99.9649% ( 7) 00:16:20.759 9425.083 - 9482.204: 99.9678% ( 3) 00:16:20.759 9482.204 - 9539.326: 99.9725% ( 5) 00:16:20.759 9539.326 - 9596.448: 99.9735% ( 1) 00:16:20.759 9596.448 - 9653.570: 99.9744% ( 1) 00:16:20.759 9653.570 - 9710.691: 99.9782% ( 4) 00:16:20.759 9710.691 - 9767.813: 99.9820% ( 4) 00:16:20.759 9767.813 - 9824.935: 99.9839% ( 2) 00:16:20.759 9882.056 - 9939.178: 99.9867% ( 3) 00:16:20.759 10224.787 - 10281.908: 99.9877% ( 1) 00:16:20.759 10396.152 - 10453.274: 99.9905% ( 3) 00:16:20.759 10510.395 - 10567.517: 99.9915% ( 1) 00:16:20.759 10624.639 - 10681.760: 99.9943% ( 3) 00:16:20.759 11481.464 - 11538.586: 99.9981% ( 4) 00:16:20.759 11709.951 - 11767.073: 99.9991% ( 1) 00:16:20.759 11767.073 - 11824.195: 100.0000% ( 1) 00:16:20.759 00:16:22.664 02:41:09 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:16:22.664 00:16:22.664 real 0m4.436s 00:16:22.664 user 0m3.442s 00:16:22.664 sys 0m0.991s 00:16:22.664 02:41:09 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.664 02:41:09 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:16:22.664 ************************************ 00:16:22.664 END TEST nvme_perf 00:16:22.664 ************************************ 00:16:22.664 02:41:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:22.664 02:41:09 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:16:22.664 
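
Before the hello_world output that follows, the headline numbers in the write-workload summary above are worth a quick cross-check; they are internally consistent (values copied straight from the summary line, arithmetic only):

    # Throughput = IOPS x I/O size:
    #   105508.42 * 12288 / 1048576 ~= 1236.4 MiB/s (the log prints 1236.43;
    #   bc truncates where the tool rounds)
    echo 'scale=2; 105508.42 * 12288 / 1048576' | bc
    # Little's law: queue depth / average latency ~= IOPS
    #   128 / 1213.04 us -> ~105520, close to the reported 105508.42
    echo 'scale=0; 128 * 1000000 / 1213.04' | bc

The small gap between the two IOPS figures is expected: the average latency includes a handful of slow outliers (max 11767.41 us), and the queue is not perfectly full for the entire one-second run.
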
02:41:09 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:22.664 02:41:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.664 02:41:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.664 ************************************ 00:16:22.664 START TEST nvme_hello_world 00:16:22.664 ************************************ 00:16:22.664 02:41:09 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:16:22.924 EAL: TSC is not safe to use in SMP mode 00:16:22.924 EAL: TSC is not invariant 00:16:22.924 [2024-07-25 02:41:09.581798] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:22.924 Initializing NVMe Controllers 00:16:22.924 Attaching to 0000:00:10.0 00:16:22.924 Attached to 0000:00:10.0 00:16:22.924 Namespace ID: 1 size: 5GB 00:16:22.924 Initialization complete. 00:16:22.924 INFO: using host memory buffer for IO 00:16:22.924 Hello world! 00:16:22.924 00:16:22.924 real 0m0.534s 00:16:22.924 user 0m0.023s 00:16:22.924 sys 0m0.511s 00:16:22.924 02:41:09 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.924 02:41:09 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:22.924 ************************************ 00:16:22.924 END TEST nvme_hello_world 00:16:22.924 ************************************ 00:16:22.924 02:41:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:22.924 02:41:09 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:16:22.924 02:41:09 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:22.924 02:41:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.924 02:41:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.924 ************************************ 00:16:22.924 START TEST nvme_sgl 00:16:22.924 ************************************ 00:16:22.924 02:41:09 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:16:23.494 EAL: TSC is not safe to use in SMP mode 00:16:23.494 EAL: TSC is not invariant 00:16:23.494 [2024-07-25 02:41:10.196854] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:23.494 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:16:23.494 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:16:23.494 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:16:23.494 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:16:23.494 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:16:23.494 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:16:23.494 NVMe Readv/Writev Request test 00:16:23.494 Attaching to 0000:00:10.0 00:16:23.494 Attached to 0000:00:10.0 00:16:23.494 0000:00:10.0: build_io_request_2 test passed 00:16:23.494 0000:00:10.0: build_io_request_4 test passed 00:16:23.494 0000:00:10.0: build_io_request_5 test passed 00:16:23.494 0000:00:10.0: build_io_request_6 test passed 00:16:23.494 0000:00:10.0: build_io_request_7 test passed 00:16:23.494 0000:00:10.0: build_io_request_10 test passed 00:16:23.494 Cleaning up... 
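
Two patterns recur in the hello_world and sgl runs above and throughout the rest of this log. First, every test is driven through the run_test wrapper from autotest_common.sh, which prints the starred START/END banners, times the command (the real/user/sys triplets), and propagates its exit status; a rough, hypothetical stand-in reconstructed only from what those banners and timing lines imply, not from the actual source:

    # Hypothetical sketch of run_test's visible behaviour; the real wrapper in
    # autotest_common.sh also manages xtrace and CI timing collection.
    run_test() {
        local name=$1; shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return "$rc"
    }

Second, the "Invalid IO length parameter" lines in the sgl run are not failures of the run itself: build_io_request_0/1/3/8/9/11 appear to be deliberate negative cases, and the harness still records nvme_sgl as passed on the lines that follow.
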
00:16:23.494 00:16:23.494 real 0m0.559s 00:16:23.494 user 0m0.028s 00:16:23.494 sys 0m0.531s 00:16:23.494 02:41:10 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.494 02:41:10 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:16:23.494 ************************************ 00:16:23.494 END TEST nvme_sgl 00:16:23.494 ************************************ 00:16:23.494 02:41:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:23.494 02:41:10 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:16:23.494 02:41:10 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:23.494 02:41:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.494 02:41:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.494 ************************************ 00:16:23.494 START TEST nvme_e2edp 00:16:23.494 ************************************ 00:16:23.494 02:41:10 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:16:24.064 EAL: TSC is not safe to use in SMP mode 00:16:24.064 EAL: TSC is not invariant 00:16:24.064 [2024-07-25 02:41:10.802265] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:24.064 NVMe Write/Read with End-to-End data protection test 00:16:24.064 Attaching to 0000:00:10.0 00:16:24.064 Attached to 0000:00:10.0 00:16:24.064 Cleaning up... 00:16:24.064 00:16:24.064 real 0m0.536s 00:16:24.064 user 0m0.004s 00:16:24.064 sys 0m0.536s 00:16:24.064 02:41:10 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.064 02:41:10 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:16:24.064 ************************************ 00:16:24.064 END TEST nvme_e2edp 00:16:24.064 ************************************ 00:16:24.064 02:41:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:24.064 02:41:10 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:16:24.064 02:41:10 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:24.065 02:41:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.065 02:41:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.065 ************************************ 00:16:24.065 START TEST nvme_reserve 00:16:24.065 ************************************ 00:16:24.065 02:41:10 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:16:24.635 EAL: TSC is not safe to use in SMP mode 00:16:24.635 EAL: TSC is not invariant 00:16:24.635 [2024-07-25 02:41:11.369229] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:24.635 ===================================================== 00:16:24.635 NVMe Controller at PCI bus 0, device 16, function 0 00:16:24.635 ===================================================== 00:16:24.635 Reservations: Not Supported 00:16:24.635 Reservation test passed 00:16:24.635 00:16:24.635 real 0m0.502s 00:16:24.635 user 0m0.002s 00:16:24.635 sys 0m0.500s 00:16:24.635 02:41:11 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.635 02:41:11 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:16:24.635 ************************************ 00:16:24.635 END TEST nvme_reserve 00:16:24.635 ************************************ 00:16:24.635 02:41:11 nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:16:24.635 02:41:11 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:16:24.635 02:41:11 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:24.635 02:41:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.635 02:41:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.635 ************************************ 00:16:24.635 START TEST nvme_err_injection 00:16:24.635 ************************************ 00:16:24.635 02:41:11 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:16:25.576 EAL: TSC is not safe to use in SMP mode 00:16:25.576 EAL: TSC is not invariant 00:16:25.576 [2024-07-25 02:41:12.264932] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:25.576 NVMe Error Injection test 00:16:25.576 Attaching to 0000:00:10.0 00:16:25.576 Attached to 0000:00:10.0 00:16:25.576 0000:00:10.0: get features failed as expected 00:16:25.576 0000:00:10.0: get features successfully as expected 00:16:25.576 0000:00:10.0: read failed as expected 00:16:25.576 0000:00:10.0: read successfully as expected 00:16:25.576 Cleaning up... 00:16:25.576 00:16:25.576 real 0m0.834s 00:16:25.576 user 0m0.015s 00:16:25.576 sys 0m0.818s 00:16:25.576 02:41:12 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.576 02:41:12 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:16:25.576 ************************************ 00:16:25.576 END TEST nvme_err_injection 00:16:25.576 ************************************ 00:16:25.576 02:41:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:25.576 02:41:12 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:16:25.576 02:41:12 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:16:25.576 02:41:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.576 02:41:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:25.576 ************************************ 00:16:25.576 START TEST nvme_overhead 00:16:25.576 ************************************ 00:16:25.576 02:41:12 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:16:26.146 EAL: TSC is not safe to use in SMP mode 00:16:26.146 EAL: TSC is not invariant 00:16:26.146 [2024-07-25 02:41:12.843860] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:27.094 Initializing NVMe Controllers 00:16:27.094 Attaching to 0000:00:10.0 00:16:27.094 Attached to 0000:00:10.0 00:16:27.094 Initialization complete. Launching workers. 
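
The overhead test just launched measures host-side driver cost per I/O rather than device latency: for each 4 KiB operation (-o 4096) over a one-second run (-t 1) it records time spent in the submission and completion paths, and -H evidently selects the bucketed histograms printed below (-i 0 is the same shared-memory group id convention as the earlier runs). From the averages reported just below, a rough per-core ceiling for this driver path can be derived (arithmetic only):

    # Per-I/O driver cost ~= submit avg + complete avg = 7939.5 + 8738.7 ns
    echo 'scale=1; 7939.5 + 8738.7' | bc        # -> 16678.2 ns per 4 KiB I/O
    # Implied single-core ceiling for this path alone:
    echo 'scale=0; 1000000000 / 16678.2' | bc   # -> ~59958 IOPS

Since this is an emulated QEMU controller on a shared CI host, the absolute values serve mainly as a regression baseline rather than a hardware measurement.
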
00:16:27.094 submit (in ns) avg, min, max = 7939.5, 6085.6, 38941.7 00:16:27.094 complete (in ns) avg, min, max = 8738.7, 4901.1, 78883.2 00:16:27.094 00:16:27.094 Submit histogram 00:16:27.094 ================ 00:16:27.094 Range in us Cumulative Count 00:16:27.094 6.080 - 6.108: 0.0145% ( 1) 00:16:27.094 6.136 - 6.164: 0.0291% ( 1) 00:16:27.094 6.192 - 6.220: 0.0436% ( 1) 00:16:27.094 6.220 - 6.248: 0.0726% ( 2) 00:16:27.094 6.248 - 6.276: 0.0872% ( 1) 00:16:27.094 6.276 - 6.303: 0.1017% ( 1) 00:16:27.094 6.303 - 6.331: 0.1307% ( 2) 00:16:27.094 6.331 - 6.359: 0.1888% ( 4) 00:16:27.094 6.359 - 6.387: 0.2760% ( 6) 00:16:27.094 6.387 - 6.415: 0.3632% ( 6) 00:16:27.094 6.415 - 6.443: 0.4794% ( 8) 00:16:27.094 6.443 - 6.471: 0.5956% ( 8) 00:16:27.094 6.471 - 6.499: 0.7990% ( 14) 00:16:27.094 6.499 - 6.527: 1.0169% ( 15) 00:16:27.094 6.527 - 6.554: 1.3074% ( 20) 00:16:27.094 6.554 - 6.582: 1.5543% ( 17) 00:16:27.094 6.582 - 6.610: 1.9465% ( 27) 00:16:27.094 6.610 - 6.638: 2.3823% ( 30) 00:16:27.094 6.638 - 6.666: 2.7019% ( 22) 00:16:27.094 6.666 - 6.694: 3.1813% ( 33) 00:16:27.094 6.694 - 6.722: 3.6461% ( 32) 00:16:27.094 6.722 - 6.750: 4.3579% ( 49) 00:16:27.094 6.750 - 6.778: 5.0116% ( 45) 00:16:27.094 6.778 - 6.806: 5.7234% ( 49) 00:16:27.094 6.806 - 6.833: 6.5660% ( 58) 00:16:27.094 6.833 - 6.861: 7.5247% ( 66) 00:16:27.094 6.861 - 6.889: 8.6578% ( 78) 00:16:27.094 6.889 - 6.917: 9.7037% ( 72) 00:16:27.094 6.917 - 6.945: 11.0110% ( 90) 00:16:27.094 6.945 - 6.973: 12.4201% ( 97) 00:16:27.094 6.973 - 7.001: 14.0616% ( 113) 00:16:27.094 7.001 - 7.029: 15.7031% ( 113) 00:16:27.094 7.029 - 7.057: 17.7949% ( 144) 00:16:27.094 7.057 - 7.084: 19.9012% ( 145) 00:16:27.094 7.084 - 7.112: 21.9349% ( 140) 00:16:27.094 7.112 - 7.140: 24.2301% ( 158) 00:16:27.094 7.140 - 7.196: 29.1400% ( 338) 00:16:27.094 7.196 - 7.252: 33.9338% ( 330) 00:16:27.094 7.252 - 7.308: 38.7275% ( 330) 00:16:27.094 7.308 - 7.363: 42.7223% ( 275) 00:16:27.094 7.363 - 7.419: 47.2255% ( 310) 00:16:27.094 7.419 - 7.475: 51.1185% ( 268) 00:16:27.094 7.475 - 7.531: 54.2127% ( 213) 00:16:27.094 7.531 - 7.586: 56.5224% ( 159) 00:16:27.094 7.586 - 7.642: 57.9750% ( 100) 00:16:27.094 7.642 - 7.698: 59.1952% ( 84) 00:16:27.094 7.698 - 7.754: 60.2557% ( 73) 00:16:27.094 7.754 - 7.810: 61.2289% ( 67) 00:16:27.094 7.810 - 7.865: 62.2894% ( 73) 00:16:27.094 7.865 - 7.921: 63.1319% ( 58) 00:16:27.094 7.921 - 7.977: 63.9454% ( 56) 00:16:27.094 7.977 - 8.033: 64.4538% ( 35) 00:16:27.094 8.033 - 8.089: 65.2382% ( 54) 00:16:27.094 8.089 - 8.144: 65.8919% ( 45) 00:16:27.094 8.144 - 8.200: 66.4149% ( 36) 00:16:27.094 8.200 - 8.256: 66.9233% ( 35) 00:16:27.094 8.256 - 8.312: 67.5479% ( 43) 00:16:27.094 8.312 - 8.367: 68.2597% ( 49) 00:16:27.094 8.367 - 8.423: 69.3056% ( 72) 00:16:27.094 8.423 - 8.479: 70.1917% ( 61) 00:16:27.094 8.479 - 8.535: 71.2377% ( 72) 00:16:27.094 8.535 - 8.591: 72.5015% ( 87) 00:16:27.094 8.591 - 8.646: 73.8234% ( 91) 00:16:27.094 8.646 - 8.702: 75.2615% ( 99) 00:16:27.094 8.702 - 8.758: 77.3678% ( 145) 00:16:27.094 8.758 - 8.814: 79.8518% ( 171) 00:16:27.094 8.814 - 8.869: 82.6554% ( 193) 00:16:27.094 8.869 - 8.925: 84.9651% ( 159) 00:16:27.094 8.925 - 8.981: 86.8100% ( 127) 00:16:27.094 8.981 - 9.037: 88.5096% ( 117) 00:16:27.094 9.037 - 9.093: 90.1365% ( 112) 00:16:27.094 9.093 - 9.148: 91.3277% ( 82) 00:16:27.094 9.148 - 9.204: 92.2138% ( 61) 00:16:27.094 9.204 - 9.260: 92.9256% ( 49) 00:16:27.094 9.260 - 9.316: 93.5212% ( 41) 00:16:27.094 9.316 - 9.372: 94.0296% ( 35) 00:16:27.094 9.372 - 9.427: 94.6979% ( 46) 
00:16:27.094 9.427 - 9.483: 95.1917% ( 34) 00:16:27.094 9.483 - 9.539: 95.4678% ( 19) 00:16:27.094 9.539 - 9.595: 95.7873% ( 22) 00:16:27.094 9.595 - 9.650: 96.0052% ( 15) 00:16:27.094 9.650 - 9.706: 96.1941% ( 13) 00:16:27.094 9.706 - 9.762: 96.3974% ( 14) 00:16:27.094 9.762 - 9.818: 96.5718% ( 12) 00:16:27.094 9.818 - 9.874: 96.7606% ( 13) 00:16:27.094 9.874 - 9.929: 96.8768% ( 8) 00:16:27.094 9.929 - 9.985: 97.0076% ( 9) 00:16:27.094 9.985 - 10.041: 97.1092% ( 7) 00:16:27.094 10.041 - 10.097: 97.1964% ( 6) 00:16:27.094 10.097 - 10.152: 97.2109% ( 1) 00:16:27.094 10.152 - 10.208: 97.2690% ( 4) 00:16:27.094 10.208 - 10.264: 97.3417% ( 5) 00:16:27.094 10.264 - 10.320: 97.4433% ( 7) 00:16:27.094 10.320 - 10.376: 97.5305% ( 6) 00:16:27.094 10.376 - 10.431: 97.5450% ( 1) 00:16:27.094 10.487 - 10.543: 97.5886% ( 3) 00:16:27.094 10.599 - 10.655: 97.6612% ( 5) 00:16:27.094 10.655 - 10.710: 97.6758% ( 1) 00:16:27.094 10.710 - 10.766: 97.6903% ( 1) 00:16:27.094 10.766 - 10.822: 97.7048% ( 1) 00:16:27.094 10.822 - 10.878: 97.7339% ( 2) 00:16:27.094 10.878 - 10.933: 97.7775% ( 3) 00:16:27.094 10.933 - 10.989: 97.8356% ( 4) 00:16:27.094 10.989 - 11.045: 97.8646% ( 2) 00:16:27.094 11.045 - 11.101: 97.8791% ( 1) 00:16:27.094 11.101 - 11.157: 97.9082% ( 2) 00:16:27.094 11.157 - 11.212: 97.9372% ( 2) 00:16:27.094 11.212 - 11.268: 97.9663% ( 2) 00:16:27.094 11.380 - 11.435: 97.9808% ( 1) 00:16:27.094 11.435 - 11.491: 97.9954% ( 1) 00:16:27.094 11.491 - 11.547: 98.0389% ( 3) 00:16:27.094 11.547 - 11.603: 98.0680% ( 2) 00:16:27.094 11.603 - 11.659: 98.0825% ( 1) 00:16:27.094 11.659 - 11.714: 98.0970% ( 1) 00:16:27.094 11.714 - 11.770: 98.1406% ( 3) 00:16:27.094 11.770 - 11.826: 98.1551% ( 1) 00:16:27.094 11.826 - 11.882: 98.2132% ( 4) 00:16:27.094 11.882 - 11.938: 98.2278% ( 1) 00:16:27.094 11.993 - 12.049: 98.2714% ( 3) 00:16:27.094 12.105 - 12.161: 98.3004% ( 2) 00:16:27.094 12.161 - 12.216: 98.3585% ( 4) 00:16:27.094 12.216 - 12.272: 98.4021% ( 3) 00:16:27.094 12.272 - 12.328: 98.5038% ( 7) 00:16:27.094 12.328 - 12.384: 98.5764% ( 5) 00:16:27.094 12.384 - 12.440: 98.6345% ( 4) 00:16:27.094 12.440 - 12.495: 98.6926% ( 4) 00:16:27.094 12.495 - 12.551: 98.7217% ( 2) 00:16:27.094 12.551 - 12.607: 98.7798% ( 4) 00:16:27.094 12.607 - 12.663: 98.7943% ( 1) 00:16:27.094 12.663 - 12.719: 98.8524% ( 4) 00:16:27.094 12.774 - 12.830: 98.9250% ( 5) 00:16:27.094 12.830 - 12.886: 98.9541% ( 2) 00:16:27.094 12.886 - 12.942: 99.0267% ( 5) 00:16:27.094 12.997 - 13.053: 99.0558% ( 2) 00:16:27.095 13.053 - 13.109: 99.0703% ( 1) 00:16:27.095 13.109 - 13.165: 99.0848% ( 1) 00:16:27.095 13.165 - 13.221: 99.1284% ( 3) 00:16:27.095 13.221 - 13.276: 99.1429% ( 1) 00:16:27.095 13.332 - 13.388: 99.1575% ( 1) 00:16:27.095 13.444 - 13.499: 99.1720% ( 1) 00:16:27.095 13.667 - 13.723: 99.1865% ( 1) 00:16:27.095 13.723 - 13.778: 99.2010% ( 1) 00:16:27.095 13.834 - 13.890: 99.2156% ( 1) 00:16:27.095 13.890 - 13.946: 99.2301% ( 1) 00:16:27.095 13.946 - 14.002: 99.2446% ( 1) 00:16:27.095 14.057 - 14.113: 99.2592% ( 1) 00:16:27.095 14.113 - 14.169: 99.2737% ( 1) 00:16:27.095 14.392 - 14.504: 99.3027% ( 2) 00:16:27.095 14.727 - 14.838: 99.3173% ( 1) 00:16:27.095 15.954 - 16.065: 99.3318% ( 1) 00:16:27.095 16.065 - 16.177: 99.3463% ( 1) 00:16:27.095 16.289 - 16.400: 99.3754% ( 2) 00:16:27.095 16.400 - 16.512: 99.3899% ( 1) 00:16:27.095 16.512 - 16.623: 99.4044% ( 1) 00:16:27.095 16.623 - 16.735: 99.4480% ( 3) 00:16:27.095 16.735 - 16.846: 99.4916% ( 3) 00:16:27.095 16.846 - 16.958: 99.5061% ( 1) 00:16:27.095 16.958 - 17.070: 99.5642% ( 4) 
00:16:27.095 17.070 - 17.181: 99.6078% ( 3) 00:16:27.095 17.181 - 17.293: 99.6368% ( 2) 00:16:27.095 17.293 - 17.404: 99.6514% ( 1) 00:16:27.095 17.404 - 17.516: 99.6949% ( 3) 00:16:27.095 17.516 - 17.627: 99.7095% ( 1) 00:16:27.095 17.627 - 17.739: 99.7531% ( 3) 00:16:27.095 17.739 - 17.851: 99.7821% ( 2) 00:16:27.095 17.851 - 17.962: 99.7966% ( 1) 00:16:27.095 18.074 - 18.185: 99.8112% ( 1) 00:16:27.095 19.412 - 19.524: 99.8257% ( 1) 00:16:27.095 19.747 - 19.859: 99.8402% ( 1) 00:16:27.095 19.859 - 19.970: 99.8547% ( 1) 00:16:27.095 20.193 - 20.305: 99.8693% ( 1) 00:16:27.095 20.528 - 20.640: 99.8838% ( 1) 00:16:27.095 23.652 - 23.764: 99.8983% ( 1) 00:16:27.095 23.875 - 23.987: 99.9419% ( 3) 00:16:27.095 24.991 - 25.102: 99.9564% ( 1) 00:16:27.095 25.214 - 25.325: 99.9709% ( 1) 00:16:27.095 35.032 - 35.255: 99.9855% ( 1) 00:16:27.095 38.825 - 39.048: 100.0000% ( 1) 00:16:27.095 00:16:27.095 Complete histogram 00:16:27.095 ================== 00:16:27.095 Range in us Cumulative Count 00:16:27.095 4.881 - 4.909: 0.0436% ( 3) 00:16:27.095 4.937 - 4.965: 0.1017% ( 4) 00:16:27.095 4.965 - 4.993: 0.1888% ( 6) 00:16:27.095 4.993 - 5.020: 0.2615% ( 5) 00:16:27.095 5.020 - 5.048: 0.3777% ( 8) 00:16:27.095 5.048 - 5.076: 0.4939% ( 8) 00:16:27.095 5.076 - 5.104: 0.6537% ( 11) 00:16:27.095 5.104 - 5.132: 0.9152% ( 18) 00:16:27.095 5.132 - 5.160: 1.2783% ( 25) 00:16:27.095 5.160 - 5.188: 1.6560% ( 26) 00:16:27.095 5.188 - 5.216: 2.1209% ( 32) 00:16:27.095 5.216 - 5.244: 2.8762% ( 52) 00:16:27.095 5.244 - 5.271: 3.6026% ( 50) 00:16:27.095 5.271 - 5.299: 4.3144% ( 49) 00:16:27.095 5.299 - 5.327: 5.1133% ( 55) 00:16:27.095 5.327 - 5.355: 6.1302% ( 70) 00:16:27.095 5.355 - 5.383: 7.0598% ( 64) 00:16:27.095 5.383 - 5.411: 8.0622% ( 69) 00:16:27.095 5.411 - 5.439: 9.0209% ( 66) 00:16:27.095 5.439 - 5.467: 9.8489% ( 57) 00:16:27.095 5.467 - 5.495: 10.5898% ( 51) 00:16:27.095 5.495 - 5.523: 11.5195% ( 64) 00:16:27.095 5.523 - 5.550: 12.3911% ( 60) 00:16:27.095 5.550 - 5.578: 13.1174% ( 50) 00:16:27.095 5.578 - 5.606: 13.8582% ( 51) 00:16:27.095 5.606 - 5.634: 14.4248% ( 39) 00:16:27.095 5.634 - 5.662: 14.9332% ( 35) 00:16:27.095 5.662 - 5.690: 15.4126% ( 33) 00:16:27.095 5.690 - 5.718: 15.9500% ( 37) 00:16:27.095 5.718 - 5.746: 16.5892% ( 44) 00:16:27.095 5.746 - 5.774: 17.3010% ( 49) 00:16:27.095 5.774 - 5.801: 18.0128% ( 49) 00:16:27.095 5.801 - 5.829: 18.8408% ( 57) 00:16:27.095 5.829 - 5.857: 19.6543% ( 56) 00:16:27.095 5.857 - 5.885: 20.5404% ( 61) 00:16:27.095 5.885 - 5.913: 21.3393% ( 55) 00:16:27.095 5.913 - 5.941: 22.0802% ( 51) 00:16:27.095 5.941 - 5.969: 22.6758% ( 41) 00:16:27.095 5.969 - 5.997: 23.2859% ( 42) 00:16:27.095 5.997 - 6.025: 23.9250% ( 44) 00:16:27.095 6.025 - 6.052: 24.3608% ( 30) 00:16:27.095 6.052 - 6.080: 24.8257% ( 32) 00:16:27.095 6.080 - 6.108: 25.0291% ( 14) 00:16:27.095 6.108 - 6.136: 25.2760% ( 17) 00:16:27.095 6.136 - 6.164: 25.4358% ( 11) 00:16:27.095 6.164 - 6.192: 25.6973% ( 18) 00:16:27.095 6.192 - 6.220: 25.8425% ( 10) 00:16:27.095 6.220 - 6.248: 25.9878% ( 10) 00:16:27.095 6.248 - 6.276: 26.1040% ( 8) 00:16:27.095 6.276 - 6.303: 26.2202% ( 8) 00:16:27.095 6.303 - 6.331: 26.3510% ( 9) 00:16:27.095 6.331 - 6.359: 26.5253% ( 12) 00:16:27.095 6.359 - 6.387: 26.8449% ( 22) 00:16:27.095 6.387 - 6.415: 27.6293% ( 54) 00:16:27.095 6.415 - 6.443: 29.3725% ( 120) 00:16:27.095 6.443 - 6.471: 31.1447% ( 122) 00:16:27.095 6.471 - 6.499: 32.2777% ( 78) 00:16:27.095 6.499 - 6.527: 33.0912% ( 56) 00:16:27.095 6.527 - 6.554: 34.7472% ( 114) 00:16:27.095 6.554 - 6.582: 37.2022% ( 169) 
00:16:27.095 6.582 - 6.610: 39.0180% ( 125) 00:16:27.095 6.610 - 6.638: 39.9477% ( 64) 00:16:27.095 6.638 - 6.666: 40.4271% ( 33) 00:16:27.095 6.666 - 6.694: 40.7757% ( 24) 00:16:27.095 6.694 - 6.722: 41.1534% ( 26) 00:16:27.095 6.722 - 6.750: 41.5020% ( 24) 00:16:27.095 6.750 - 6.778: 41.8797% ( 26) 00:16:27.095 6.778 - 6.806: 42.1993% ( 22) 00:16:27.095 6.806 - 6.833: 42.4898% ( 20) 00:16:27.095 6.833 - 6.861: 42.6496% ( 11) 00:16:27.095 6.861 - 6.889: 42.7804% ( 9) 00:16:27.095 6.889 - 6.917: 42.9692% ( 13) 00:16:27.095 6.917 - 6.945: 43.2888% ( 22) 00:16:27.095 6.945 - 6.973: 43.3905% ( 7) 00:16:27.095 6.973 - 7.001: 43.5503% ( 11) 00:16:27.095 7.001 - 7.029: 43.6229% ( 5) 00:16:27.095 7.029 - 7.057: 43.6665% ( 3) 00:16:27.095 7.084 - 7.112: 43.6955% ( 2) 00:16:27.095 7.112 - 7.140: 43.7391% ( 3) 00:16:27.095 7.140 - 7.196: 43.8698% ( 9) 00:16:27.095 7.196 - 7.252: 44.0151% ( 10) 00:16:27.095 7.252 - 7.308: 44.0732% ( 4) 00:16:27.095 7.308 - 7.363: 44.1749% ( 7) 00:16:27.095 7.363 - 7.419: 44.3347% ( 11) 00:16:27.095 7.419 - 7.475: 44.4073% ( 5) 00:16:27.095 7.475 - 7.531: 44.4509% ( 3) 00:16:27.095 7.531 - 7.586: 44.4800% ( 2) 00:16:27.095 7.586 - 7.642: 44.5090% ( 2) 00:16:27.095 7.642 - 7.698: 44.5381% ( 2) 00:16:27.095 7.698 - 7.754: 44.6107% ( 5) 00:16:27.095 7.810 - 7.865: 44.6252% ( 1) 00:16:27.095 7.865 - 7.921: 44.6543% ( 2) 00:16:27.095 7.921 - 7.977: 44.6833% ( 2) 00:16:27.095 7.977 - 8.033: 44.6979% ( 1) 00:16:27.095 8.033 - 8.089: 44.7560% ( 4) 00:16:27.095 8.144 - 8.200: 44.7850% ( 2) 00:16:27.095 8.200 - 8.256: 44.7995% ( 1) 00:16:27.095 8.256 - 8.312: 44.8286% ( 2) 00:16:27.095 8.367 - 8.423: 44.8431% ( 1) 00:16:27.095 8.423 - 8.479: 44.8576% ( 1) 00:16:27.095 8.535 - 8.591: 44.8722% ( 1) 00:16:27.095 8.702 - 8.758: 44.9157% ( 3) 00:16:27.095 8.814 - 8.869: 44.9303% ( 1) 00:16:27.095 8.925 - 8.981: 44.9448% ( 1) 00:16:27.095 8.981 - 9.037: 44.9739% ( 2) 00:16:27.095 9.037 - 9.093: 44.9884% ( 1) 00:16:27.095 9.093 - 9.148: 45.0174% ( 2) 00:16:27.095 9.148 - 9.204: 45.0610% ( 3) 00:16:27.095 9.204 - 9.260: 45.0755% ( 1) 00:16:27.095 9.260 - 9.316: 45.1482% ( 5) 00:16:27.095 9.316 - 9.372: 45.1772% ( 2) 00:16:27.095 9.372 - 9.427: 45.1917% ( 1) 00:16:27.095 9.427 - 9.483: 45.2353% ( 3) 00:16:27.095 9.483 - 9.539: 45.2934% ( 4) 00:16:27.095 9.539 - 9.595: 45.3225% ( 2) 00:16:27.095 9.595 - 9.650: 45.4242% ( 7) 00:16:27.095 9.650 - 9.706: 45.5404% ( 8) 00:16:27.095 9.706 - 9.762: 45.7438% ( 14) 00:16:27.095 9.762 - 9.818: 46.0052% ( 18) 00:16:27.095 9.818 - 9.874: 46.3393% ( 23) 00:16:27.095 9.874 - 9.929: 46.7751% ( 30) 00:16:27.095 9.929 - 9.985: 47.4288% ( 45) 00:16:27.095 9.985 - 10.041: 48.2859% ( 59) 00:16:27.095 10.041 - 10.097: 49.1429% ( 59) 00:16:27.095 10.097 - 10.152: 50.4213% ( 88) 00:16:27.095 10.152 - 10.208: 52.0046% ( 109) 00:16:27.095 10.208 - 10.264: 53.7914% ( 123) 00:16:27.095 10.264 - 10.320: 55.7670% ( 136) 00:16:27.095 10.320 - 10.376: 57.7716% ( 138) 00:16:27.095 10.376 - 10.431: 60.1249% ( 162) 00:16:27.095 10.431 - 10.487: 62.6380% ( 173) 00:16:27.095 10.487 - 10.543: 65.1075% ( 170) 00:16:27.095 10.543 - 10.599: 67.8820% ( 191) 00:16:27.095 10.599 - 10.655: 70.1772% ( 158) 00:16:27.095 10.655 - 10.710: 72.5015% ( 160) 00:16:27.095 10.710 - 10.766: 74.4625% ( 135) 00:16:27.096 10.766 - 10.822: 76.5543% ( 144) 00:16:27.096 10.822 - 10.878: 78.3266% ( 122) 00:16:27.096 10.878 - 10.933: 79.9680% ( 113) 00:16:27.096 10.933 - 10.989: 81.7403% ( 122) 00:16:27.096 10.989 - 11.045: 83.1639% ( 98) 00:16:27.096 11.045 - 11.101: 84.4277% ( 87) 00:16:27.096 
11.101 - 11.157: 85.6479% ( 84) 00:16:27.096 11.157 - 11.212: 86.8681% ( 84) 00:16:27.096 11.212 - 11.268: 88.0593% ( 82) 00:16:27.096 11.268 - 11.324: 89.1052% ( 72) 00:16:27.096 11.324 - 11.380: 89.9913% ( 61) 00:16:27.096 11.380 - 11.435: 90.8629% ( 60) 00:16:27.096 11.435 - 11.491: 91.7780% ( 63) 00:16:27.096 11.491 - 11.547: 92.3300% ( 38) 00:16:27.096 11.547 - 11.603: 92.8239% ( 34) 00:16:27.096 11.603 - 11.659: 93.1871% ( 25) 00:16:27.096 11.659 - 11.714: 93.6519% ( 32) 00:16:27.096 11.714 - 11.770: 94.0442% ( 27) 00:16:27.096 11.770 - 11.826: 94.5381% ( 34) 00:16:27.096 11.826 - 11.882: 94.8576% ( 22) 00:16:27.096 11.882 - 11.938: 95.2063% ( 24) 00:16:27.096 11.938 - 11.993: 95.6130% ( 28) 00:16:27.096 11.993 - 12.049: 95.9471% ( 23) 00:16:27.096 12.049 - 12.105: 96.2086% ( 18) 00:16:27.096 12.105 - 12.161: 96.4555% ( 17) 00:16:27.096 12.161 - 12.216: 96.6299% ( 12) 00:16:27.096 12.216 - 12.272: 96.8478% ( 15) 00:16:27.096 12.272 - 12.328: 97.0802% ( 16) 00:16:27.096 12.328 - 12.384: 97.1964% ( 8) 00:16:27.096 12.384 - 12.440: 97.2836% ( 6) 00:16:27.096 12.440 - 12.495: 97.5160% ( 16) 00:16:27.096 12.495 - 12.551: 97.7193% ( 14) 00:16:27.096 12.551 - 12.607: 97.8065% ( 6) 00:16:27.096 12.607 - 12.663: 97.9082% ( 7) 00:16:27.096 12.663 - 12.719: 98.0244% ( 8) 00:16:27.096 12.719 - 12.774: 98.0535% ( 2) 00:16:27.096 12.774 - 12.830: 98.1551% ( 7) 00:16:27.096 12.830 - 12.886: 98.2568% ( 7) 00:16:27.096 12.886 - 12.942: 98.3440% ( 6) 00:16:27.096 12.942 - 12.997: 98.4021% ( 4) 00:16:27.096 12.997 - 13.053: 98.4747% ( 5) 00:16:27.096 13.053 - 13.109: 98.5328% ( 4) 00:16:27.096 13.109 - 13.165: 98.6200% ( 6) 00:16:27.096 13.165 - 13.221: 98.6636% ( 3) 00:16:27.096 13.221 - 13.276: 98.7653% ( 7) 00:16:27.096 13.276 - 13.332: 98.7943% ( 2) 00:16:27.096 13.332 - 13.388: 98.8234% ( 2) 00:16:27.096 13.388 - 13.444: 98.8379% ( 1) 00:16:27.096 13.444 - 13.499: 98.8524% ( 1) 00:16:27.096 13.499 - 13.555: 98.9250% ( 5) 00:16:27.096 13.555 - 13.611: 98.9541% ( 2) 00:16:27.096 13.611 - 13.667: 99.0558% ( 7) 00:16:27.096 13.667 - 13.723: 99.0994% ( 3) 00:16:27.096 13.723 - 13.778: 99.1139% ( 1) 00:16:27.096 13.778 - 13.834: 99.1284% ( 1) 00:16:27.096 13.834 - 13.890: 99.1575% ( 2) 00:16:27.096 13.890 - 13.946: 99.1865% ( 2) 00:16:27.096 13.946 - 14.002: 99.2010% ( 1) 00:16:27.096 14.002 - 14.057: 99.2156% ( 1) 00:16:27.096 14.057 - 14.113: 99.2301% ( 1) 00:16:27.096 14.113 - 14.169: 99.2737% ( 3) 00:16:27.096 14.280 - 14.392: 99.3173% ( 3) 00:16:27.096 14.392 - 14.504: 99.4189% ( 7) 00:16:27.096 14.504 - 14.615: 99.4335% ( 1) 00:16:27.096 14.615 - 14.727: 99.4625% ( 2) 00:16:27.096 14.727 - 14.838: 99.5061% ( 3) 00:16:27.096 14.838 - 14.950: 99.5497% ( 3) 00:16:27.096 14.950 - 15.061: 99.5642% ( 1) 00:16:27.096 15.061 - 15.173: 99.6223% ( 4) 00:16:27.096 15.173 - 15.285: 99.6368% ( 1) 00:16:27.096 15.285 - 15.396: 99.6659% ( 2) 00:16:27.096 15.508 - 15.619: 99.6949% ( 2) 00:16:27.096 15.731 - 15.842: 99.7240% ( 2) 00:16:27.096 15.842 - 15.954: 99.7385% ( 1) 00:16:27.096 16.623 - 16.735: 99.7531% ( 1) 00:16:27.096 17.851 - 17.962: 99.7676% ( 1) 00:16:27.096 18.743 - 18.855: 99.7821% ( 1) 00:16:27.096 18.966 - 19.078: 99.7966% ( 1) 00:16:27.096 19.078 - 19.189: 99.8112% ( 1) 00:16:27.096 19.636 - 19.747: 99.8257% ( 1) 00:16:27.096 19.747 - 19.859: 99.8547% ( 2) 00:16:27.096 19.970 - 20.082: 99.8693% ( 1) 00:16:27.096 20.193 - 20.305: 99.8983% ( 2) 00:16:27.096 20.417 - 20.528: 99.9128% ( 1) 00:16:27.096 21.421 - 21.532: 99.9274% ( 1) 00:16:27.096 22.536 - 22.648: 99.9419% ( 1) 00:16:27.096 23.652 - 
23.764: 99.9564% ( 1) 00:16:27.096 28.784 - 29.007: 99.9709% ( 1) 00:16:27.096 29.007 - 29.230: 99.9855% ( 1) 00:16:27.096 78.542 - 78.989: 100.0000% ( 1) 00:16:27.096 00:16:27.096 00:16:27.096 real 0m1.526s 00:16:27.096 user 0m1.024s 00:16:27.096 sys 0m0.501s 00:16:27.096 02:41:13 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:27.096 02:41:13 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:16:27.096 ************************************ 00:16:27.096 END TEST nvme_overhead 00:16:27.096 ************************************ 00:16:27.096 02:41:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:27.096 02:41:13 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:16:27.096 02:41:13 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:16:27.096 02:41:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.096 02:41:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.096 ************************************ 00:16:27.096 START TEST nvme_arbitration 00:16:27.096 ************************************ 00:16:27.096 02:41:13 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:16:28.065 EAL: TSC is not safe to use in SMP mode 00:16:28.065 EAL: TSC is not invariant 00:16:28.065 [2024-07-25 02:41:14.721274] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:32.244 Initializing NVMe Controllers 00:16:32.244 Attaching to 0000:00:10.0 00:16:32.244 Attached to 0000:00:10.0 00:16:32.244 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:16:32.244 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:16:32.244 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:16:32.244 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:16:32.244 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:16:32.244 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:16:32.244 Initialization complete. Launching workers. 
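
The arbitration example just launched starts one submitting thread per core in the 0xf mask (cores 0-3, matching the four "Associating ... with lcore" lines above), each issuing a 50/50 random read/write mix (-w randrw -M 50) at queue depth 64 for three seconds (-t 3). In the per-core result lines that follow, the "secs/100000 ios" column is simply a projection of how long 100000 I/Os (-n 100000) would take at that core's measured rate, e.g. for core 0:

    # 100000 ios / 6315.33 IO/s -> 15.83 s, the figure printed for core 0
    echo 'scale=2; 100000 / 6315.33' | bc

With all four workers configured identically, the split comes out nearly even at roughly 6.3k IO/s per core, which is the expected outcome for this configuration.
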
00:16:32.244 Starting thread on core 1 with urgent priority queue 00:16:32.244 Starting thread on core 2 with urgent priority queue 00:16:32.244 Starting thread on core 3 with urgent priority queue 00:16:32.244 Starting thread on core 0 with urgent priority queue 00:16:32.244 QEMU NVMe Ctrl (12340 ) core 0: 6315.33 IO/s 15.83 secs/100000 ios 00:16:32.244 QEMU NVMe Ctrl (12340 ) core 1: 6306.00 IO/s 15.86 secs/100000 ios 00:16:32.244 QEMU NVMe Ctrl (12340 ) core 2: 6296.67 IO/s 15.88 secs/100000 ios 00:16:32.244 QEMU NVMe Ctrl (12340 ) core 3: 6298.00 IO/s 15.88 secs/100000 ios 00:16:32.244 ======================================================== 00:16:32.244 00:16:32.244 00:16:32.244 real 0m4.931s 00:16:32.244 user 0m13.153s 00:16:32.244 sys 0m0.807s 00:16:32.244 02:41:18 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:32.244 02:41:18 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:16:32.244 ************************************ 00:16:32.244 END TEST nvme_arbitration 00:16:32.244 ************************************ 00:16:32.244 02:41:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:32.244 02:41:18 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:16:32.244 02:41:18 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:32.244 02:41:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:32.244 02:41:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:32.244 ************************************ 00:16:32.244 START TEST nvme_single_aen 00:16:32.244 ************************************ 00:16:32.244 02:41:18 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:16:32.812 EAL: TSC is not safe to use in SMP mode 00:16:32.812 EAL: TSC is not invariant 00:16:32.812 [2024-07-25 02:41:19.701244] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:33.072 Asynchronous Event Request test 00:16:33.072 Attaching to 0000:00:10.0 00:16:33.072 Attached to 0000:00:10.0 00:16:33.072 Reset controller to setup AER completions for this process 00:16:33.072 Registering asynchronous event callbacks... 00:16:33.072 Getting orig temperature thresholds of all controllers 00:16:33.072 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:33.072 Setting all controllers temperature threshold low to trigger AER 00:16:33.072 Waiting for all controllers temperature threshold to be set lower 00:16:33.072 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:33.072 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:16:33.072 Waiting for all controllers to trigger AER and reset threshold 00:16:33.072 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:33.072 Cleaning up... 
00:16:33.072 00:16:33.072 real 0m0.792s 00:16:33.072 user 0m0.000s 00:16:33.072 sys 0m0.784s 00:16:33.072 02:41:19 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.072 02:41:19 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:16:33.072 ************************************ 00:16:33.072 END TEST nvme_single_aen 00:16:33.072 ************************************ 00:16:33.072 02:41:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:33.072 02:41:19 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:16:33.072 02:41:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:33.072 02:41:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.072 02:41:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:33.072 ************************************ 00:16:33.072 START TEST nvme_doorbell_aers 00:16:33.072 ************************************ 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:33.072 02:41:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:33.642 EAL: TSC is not safe to use in SMP mode 00:16:33.642 EAL: TSC is not invariant 00:16:33.642 [2024-07-25 02:41:20.360067] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:33.642 Executing: test_write_invalid_db 00:16:33.642 Waiting for AER completion... 00:16:33.642 Asynchronous Event received. 00:16:33.642 Error Informaton Log Page received. 00:16:33.642 Success: test_write_invalid_db 00:16:33.642 00:16:33.642 Executing: test_invalid_db_write_overflow_sq 00:16:33.642 Waiting for AER completion... 00:16:33.642 Asynchronous Event received. 00:16:33.642 Error Informaton Log Page received. 00:16:33.642 Success: test_invalid_db_write_overflow_sq 00:16:33.642 00:16:33.642 Executing: test_invalid_db_write_overflow_cq 00:16:33.642 Waiting for AER completion... 00:16:33.642 Asynchronous Event received. 00:16:33.642 Error Informaton Log Page received. 
00:16:33.642 Success: test_invalid_db_write_overflow_cq 00:16:33.642 00:16:33.642 00:16:33.642 real 0m0.606s 00:16:33.642 user 0m0.068s 00:16:33.642 sys 0m0.561s 00:16:33.642 02:41:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.642 02:41:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:16:33.642 ************************************ 00:16:33.642 END TEST nvme_doorbell_aers 00:16:33.642 ************************************ 00:16:33.642 02:41:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:33.642 02:41:20 nvme -- nvme/nvme.sh@97 -- # uname 00:16:33.642 02:41:20 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:16:33.642 02:41:20 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:16:33.642 02:41:20 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:33.642 02:41:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.642 02:41:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:33.642 ************************************ 00:16:33.642 START TEST bdev_nvme_reset_stuck_adm_cmd 00:16:33.642 ************************************ 00:16:33.642 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:16:33.902 * Looking for test storage... 00:16:33.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=68409 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 68409 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 68409 ']' 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.902 02:41:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:33.902 [2024-07-25 02:41:20.751636] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:16:33.902 [2024-07-25 02:41:20.751883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:34.471 EAL: TSC is not safe to use in SMP mode 00:16:34.471 EAL: TSC is not invariant 00:16:34.471 [2024-07-25 02:41:21.191765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.471 [2024-07-25 02:41:21.311765] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:34.471 [2024-07-25 02:41:21.311788] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:34.471 [2024-07-25 02:41:21.311795] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:16:34.471 [2024-07-25 02:41:21.311800] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
00:16:34.471 [2024-07-25 02:41:21.315950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.471 [2024-07-25 02:41:21.316262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.471 [2024-07-25 02:41:21.316110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.471 [2024-07-25 02:41:21.316261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.729 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.729 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:16:34.729 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:16:34.729 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.729 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:34.729 [2024-07-25 02:41:21.623080] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:34.987 nvme0n1 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:34.987 true 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721875281 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=68421 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:34.987 02:41:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:37.521 [2024-07-25 02:41:23.816787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:16:37.521 [2024-07-25 02:41:23.816870] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:37.521 [2024-07-25 02:41:23.816882] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:37.521 [2024-07-25 02:41:23.816890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.521 [2024-07-25 02:41:23.817651] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.521 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 68421 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 68421 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 68421 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:37.521 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.TLeimu 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.29e2hB 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 68409 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 68409 ']' 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 68409 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 68409 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:16:37.522 killing process with pid 68409 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68409' 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 68409 00:16:37.522 02:41:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 68409 00:16:37.522 02:41:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:16:37.522 02:41:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:16:37.522 00:16:37.522 real 0m3.813s 00:16:37.522 user 0m12.149s 00:16:37.522 sys 0m0.860s 00:16:37.522 02:41:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.522 02:41:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:16:37.522 ************************************ 00:16:37.522 END TEST bdev_nvme_reset_stuck_adm_cmd 00:16:37.522 ************************************ 00:16:37.522 02:41:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:37.522 02:41:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:16:37.522 02:41:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:16:37.522 02:41:24 nvme -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:37.522 02:41:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.522 02:41:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.522 ************************************ 00:16:37.522 START TEST nvme_fio 00:16:37.522 ************************************ 00:16:37.522 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:16:37.522 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:37.522 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:16:37.522 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:16:37.522 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:37.522 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:16:37.522 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:37.522 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:37.522 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:37.781 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:16:37.781 02:41:24 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:16:37.781 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:16:37.781 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:16:37.781 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:16:37.781 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:37.781 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:16:38.041 EAL: TSC is not safe to use in SMP mode 00:16:38.041 EAL: TSC is not invariant 00:16:38.041 [2024-07-25 02:41:24.911032] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:38.299 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:38.300 02:41:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:16:38.867 EAL: TSC is not safe to use in SMP mode 00:16:38.868 EAL: TSC is not invariant 00:16:38.868 [2024-07-25 02:41:25.708346] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:39.127 02:41:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:16:39.127 02:41:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:39.127 02:41:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:16:39.127 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:39.127 fio-3.35 00:16:39.127 Starting 1 thread 00:16:39.695 EAL: TSC is not safe to use in SMP mode 00:16:39.695 EAL: TSC is not invariant 00:16:39.695 [2024-07-25 02:41:26.316968] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:42.228 00:16:42.228 test: (groupid=0, jobs=1): err= 0: pid=101562: Thu Jul 25 02:41:28 2024 00:16:42.228 read: IOPS=50.3k, BW=197MiB/s (206MB/s)(393MiB/2001msec) 00:16:42.228 slat (nsec): min=437, max=39466, avg=517.17, stdev=209.40 00:16:42.228 clat (usec): min=275, max=4420, avg=1271.20, stdev=206.54 00:16:42.228 lat (usec): min=275, max=4460, avg=1271.72, stdev=206.60 00:16:42.228 clat percentiles (usec): 00:16:42.228 | 1.00th=[ 963], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 00:16:42.228 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1303], 00:16:42.228 | 70.00th=[ 1336], 80.00th=[ 1385], 90.00th=[ 1434], 95.00th=[ 1483], 00:16:42.228 | 99.00th=[ 2073], 99.50th=[ 2474], 99.90th=[ 3326], 99.95th=[ 3621], 00:16:42.228 | 99.99th=[ 4293] 00:16:42.228 bw ( KiB/s): min=191920, max=205600, per=99.83%, avg=200933.33, stdev=7807.42, samples=3 00:16:42.228 iops : min=47980, max=51400, avg=50233.33, stdev=1951.85, samples=3 00:16:42.228 write: IOPS=50.3k, BW=196MiB/s (206MB/s)(393MiB/2001msec); 0 zone resets 00:16:42.228 slat (nsec): min=455, max=18187, avg=932.71, stdev=320.40 00:16:42.228 clat (usec): min=266, max=4390, avg=1270.39, stdev=209.27 00:16:42.228 lat (usec): min=267, max=4395, avg=1271.32, stdev=209.37 00:16:42.228 clat percentiles (usec): 00:16:42.228 | 1.00th=[ 955], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 00:16:42.228 | 
30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1254], 60.00th=[ 1303], 00:16:42.228 | 70.00th=[ 1336], 80.00th=[ 1385], 90.00th=[ 1434], 95.00th=[ 1483], 00:16:42.228 | 99.00th=[ 2073], 99.50th=[ 2507], 99.90th=[ 3359], 99.95th=[ 3687], 00:16:42.228 | 99.99th=[ 4228] 00:16:42.228 bw ( KiB/s): min=191024, max=204704, per=99.42%, avg=199842.67, stdev=7650.55, samples=3 00:16:42.229 iops : min=47756, max=51176, avg=49960.67, stdev=1912.64, samples=3 00:16:42.229 lat (usec) : 500=0.07%, 750=0.13%, 1000=2.10% 00:16:42.229 lat (msec) : 2=96.56%, 4=1.12%, 10=0.03% 00:16:42.229 cpu : usr=100.00%, sys=0.00%, ctx=23, majf=0, minf=2 00:16:42.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:42.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:42.229 issued rwts: total=100692,100558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:42.229 00:16:42.229 Run status group 0 (all jobs): 00:16:42.229 READ: bw=197MiB/s (206MB/s), 197MiB/s-197MiB/s (206MB/s-206MB/s), io=393MiB (412MB), run=2001-2001msec 00:16:42.229 WRITE: bw=196MiB/s (206MB/s), 196MiB/s-196MiB/s (206MB/s-206MB/s), io=393MiB (412MB), run=2001-2001msec 00:16:42.489 02:41:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:16:42.489 02:41:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:16:42.489 00:16:42.489 real 0m4.915s 00:16:42.489 user 0m2.728s 00:16:42.489 sys 0m2.124s 00:16:42.489 02:41:29 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.489 02:41:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:16:42.489 ************************************ 00:16:42.489 END TEST nvme_fio 00:16:42.489 ************************************ 00:16:42.489 02:41:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:16:42.489 00:16:42.489 real 0m26.913s 00:16:42.489 user 0m33.078s 00:16:42.489 sys 0m11.952s 00:16:42.489 02:41:29 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.489 02:41:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:42.489 ************************************ 00:16:42.489 END TEST nvme 00:16:42.489 ************************************ 00:16:42.750 02:41:29 -- common/autotest_common.sh@1142 -- # return 0 00:16:42.750 02:41:29 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:16:42.750 02:41:29 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:42.750 02:41:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:42.750 02:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.750 02:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:42.750 ************************************ 00:16:42.750 START TEST nvme_scc 00:16:42.750 ************************************ 00:16:42.750 02:41:29 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:16:42.750 * Looking for test storage... 
00:16:42.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:42.750 02:41:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.750 02:41:29 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.750 02:41:29 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.750 02:41:29 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.750 02:41:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:16:42.750 02:41:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:16:42.750 02:41:29 nvme_scc -- paths/export.sh@4 -- # export PATH 00:16:42.750 02:41:29 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:16:42.750 02:41:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:16:42.750 02:41:29 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.750 02:41:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:16:42.750 02:41:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:16:42.750 02:41:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:16:42.750 00:16:42.750 real 0m0.236s 00:16:42.750 user 0m0.126s 00:16:42.750 sys 0m0.181s 00:16:42.750 02:41:29 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.750 02:41:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:16:42.750 ************************************ 00:16:42.750 END TEST nvme_scc 00:16:42.750 ************************************ 00:16:43.010 02:41:29 -- common/autotest_common.sh@1142 -- # return 0 00:16:43.010 02:41:29 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:16:43.010 02:41:29 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:16:43.010 02:41:29 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:16:43.010 02:41:29 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:16:43.010 02:41:29 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:16:43.010 02:41:29 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:43.010 02:41:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:43.010 02:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.010 02:41:29 -- common/autotest_common.sh@10 -- # set +x 00:16:43.010 ************************************ 00:16:43.010 START TEST nvme_rpc 00:16:43.010 ************************************ 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:16:43.010 * Looking for test storage... 00:16:43.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:43.010 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:43.010 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:43.010 02:41:29 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:16:43.270 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:16:43.270 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68663 00:16:43.270 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:16:43.270 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:43.270 02:41:29 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68663 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 68663 ']' 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.270 02:41:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.270 [2024-07-25 02:41:29.951869] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 
00:16:43.270 [2024-07-25 02:41:29.952218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:43.529 EAL: TSC is not safe to use in SMP mode 00:16:43.529 EAL: TSC is not invariant 00:16:43.529 [2024-07-25 02:41:30.376708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.788 [2024-07-25 02:41:30.493404] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:43.788 [2024-07-25 02:41:30.493429] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:16:43.788 [2024-07-25 02:41:30.496383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.788 [2024-07-25 02:41:30.496379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.047 02:41:30 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.047 02:41:30 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:16:44.048 02:41:30 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:16:44.307 [2024-07-25 02:41:30.996107] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:16:44.307 Nvme0n1 00:16:44.307 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:16:44.307 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:16:44.566 request: 00:16:44.566 { 00:16:44.566 "bdev_name": "Nvme0n1", 00:16:44.566 "filename": "non_existing_file", 00:16:44.566 "method": "bdev_nvme_apply_firmware", 00:16:44.566 "req_id": 1 00:16:44.566 } 00:16:44.566 Got JSON-RPC error response 00:16:44.566 response: 00:16:44.566 { 00:16:44.566 "code": -32603, 00:16:44.566 "message": "open file failed." 
00:16:44.566 } 00:16:44.566 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:16:44.566 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:16:44.566 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:44.566 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:44.566 02:41:31 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68663 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 68663 ']' 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 68663 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 68663 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:16:44.566 killing process with pid 68663 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68663' 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@967 -- # kill 68663 00:16:44.566 02:41:31 nvme_rpc -- common/autotest_common.sh@972 -- # wait 68663 00:16:45.135 00:16:45.135 real 0m2.119s 00:16:45.135 user 0m3.519s 00:16:45.135 sys 0m0.720s 00:16:45.135 02:41:31 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.135 02:41:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.135 ************************************ 00:16:45.135 END TEST nvme_rpc 00:16:45.135 ************************************ 00:16:45.135 02:41:31 -- common/autotest_common.sh@1142 -- # return 0 00:16:45.135 02:41:31 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:45.135 02:41:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:45.135 02:41:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.135 02:41:31 -- common/autotest_common.sh@10 -- # set +x 00:16:45.135 ************************************ 00:16:45.135 START TEST nvme_rpc_timeouts 00:16:45.135 ************************************ 00:16:45.135 02:41:31 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:16:45.394 * Looking for test storage... 
00:16:45.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68700 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68700 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68728 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:16:45.394 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68728 00:16:45.394 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 68728 ']' 00:16:45.394 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.394 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.394 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.394 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.394 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:45.394 [2024-07-25 02:41:32.073644] Starting SPDK v24.09-pre git sha1 c8a637412 / DPDK 24.03.0 initialization... 00:16:45.394 [2024-07-25 02:41:32.073879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:45.654 EAL: TSC is not safe to use in SMP mode 00:16:45.654 EAL: TSC is not invariant 00:16:45.654 [2024-07-25 02:41:32.519030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:45.913 [2024-07-25 02:41:32.635750] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:45.913 [2024-07-25 02:41:32.635776] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:16:45.913 [2024-07-25 02:41:32.638799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.913 [2024-07-25 02:41:32.638758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.172 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.172 02:41:32 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:16:46.172 Checking default timeout settings: 00:16:46.172 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:16:46.172 02:41:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:46.431 Making settings changes with rpc: 00:16:46.431 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:16:46.431 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:16:46.689 Check default vs. modified settings: 00:16:46.689 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:16:46.689 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:16:46.948 Setting action_on_timeout is changed as expected. 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:16:46.948 Setting timeout_us is changed as expected. 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:16:46.948 Setting timeout_admin_us is changed as expected. 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68700 /tmp/settings_modified_68700 00:16:46.948 02:41:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68728 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 68728 ']' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 68728 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 68728 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:16:46.948 killing process with pid 68728 00:16:46.948 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68728' 00:16:46.949 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 68728 00:16:46.949 02:41:33 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 68728 00:16:47.517 RPC TIMEOUT SETTING TEST PASSED. 00:16:47.517 02:41:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:16:47.517 00:16:47.517 real 0m2.247s 00:16:47.517 user 0m3.830s 00:16:47.517 sys 0m0.810s 00:16:47.517 02:41:34 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.517 02:41:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:16:47.517 ************************************ 00:16:47.517 END TEST nvme_rpc_timeouts 00:16:47.517 ************************************ 00:16:47.517 02:41:34 -- common/autotest_common.sh@1142 -- # return 0 00:16:47.517 02:41:34 -- spdk/autotest.sh@243 -- # uname -s 00:16:47.517 02:41:34 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:16:47.517 02:41:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:47.517 02:41:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.517 02:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.517 02:41:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:16:47.517 02:41:34 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:16:47.517 02:41:34 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:16:47.517 02:41:34 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:16:47.517 02:41:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:16:47.517 02:41:34 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:16:47.517 02:41:34 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:16:47.517 02:41:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:47.517 02:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:47.517 02:41:34 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:16:47.517 02:41:34 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:16:47.517 02:41:34 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:47.517 02:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:48.087 setup.sh cleanup function not yet supported on FreeBSD 00:16:48.087 02:41:34 -- common/autotest_common.sh@1451 -- # return 0 00:16:48.087 02:41:34 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:16:48.087 02:41:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.087 02:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:48.087 02:41:34 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:16:48.087 02:41:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.087 02:41:34 -- common/autotest_common.sh@10 -- # set +x 00:16:48.348 02:41:35 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:48.348 02:41:35 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:48.348 02:41:35 -- spdk/autotest.sh@391 -- # hash lcov 00:16:48.348 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:16:48.348 02:41:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.348 02:41:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:16:48.348 02:41:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.348 02:41:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.348 02:41:35 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:16:48.348 02:41:35 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:16:48.348 02:41:35 -- paths/export.sh@4 -- $ export PATH 00:16:48.348 02:41:35 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:16:48.348 02:41:35 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:16:48.348 02:41:35 -- common/autobuild_common.sh@447 -- $ date +%s 00:16:48.348 02:41:35 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721875295.XXXXXX 00:16:48.348 02:41:35 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721875295.XXXXXX.fESHJXXZ9U 00:16:48.348 02:41:35 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:16:48.348 02:41:35 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:16:48.348 02:41:35 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:16:48.348 02:41:35 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' 
00:16:48.348 02:41:35 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:16:48.348 02:41:35 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:16:48.348 02:41:35 -- common/autobuild_common.sh@463 -- $ get_config_params
00:16:48.348 02:41:35 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:16:48.348 02:41:35 -- common/autotest_common.sh@10 -- $ set +x
00:16:48.608 02:41:35 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio'
00:16:48.608 02:41:35 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:16:48.608 02:41:35 -- pm/common@17 -- $ local monitor
00:16:48.608 02:41:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:16:48.608 02:41:35 -- pm/common@25 -- $ sleep 1
00:16:48.608 02:41:35 -- pm/common@21 -- $ date +%s
00:16:48.608 02:41:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721875295
00:16:48.608 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721875295_collect-vmstat.pm.log
00:16:49.548 02:41:36 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:16:49.548 02:41:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:16:49.548 02:41:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:16:49.548 02:41:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:16:49.549 02:41:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:16:49.549 02:41:36 -- spdk/autopackage.sh@19 -- $ timing_finish
00:16:49.549 02:41:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:16:49.549 02:41:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:16:49.549 02:41:36 -- spdk/autopackage.sh@20 -- $ exit 0
00:16:49.549 02:41:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:16:49.549 02:41:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:16:49.549 02:41:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:16:49.549 02:41:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:16:49.549 02:41:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:16:49.549 02:41:36 -- pm/common@44 -- $ pid=68959
00:16:49.549 02:41:36 -- pm/common@50 -- $ kill -TERM 68959
+ [[ -n 1283 ]]
+ sudo kill 1283
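
Note on the pm/common trace above (start_monitor_resources through kill -TERM 68959): the resource-monitor lifecycle visible in the trace amounts to roughly the pair of helpers sketched below. Only the traced commands are certain; the MONITOR_RESOURCES contents, the pid-file writing, and the directory variables are assumptions made for illustration, not the real scripts/perf/pm code.

    # Sketch of the resource-monitor start/stop seen in the pm/common trace above.
    # Paths and the pid-file convention are assumptions; only collect-vmstat and
    # pid 68959 come from the log itself.
    MONITOR_RESOURCES=(collect-vmstat)                      # assumption: only collect-vmstat appears in this run
    pm_dir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    start_monitor_resources() {
        local monitor                                       # pm/common@17
        for monitor in "${MONITOR_RESOURCES[@]}"; do        # pm/common@19
            sleep 1                                         # pm/common@25
            # pm/common@21: launch the collector with a log prefix derived from date +%s
            "$pm_dir/$monitor" -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)" &
            echo $! > "$power_dir/$monitor.pid"             # assumption: how the pid file gets created
        done
    }

    stop_monitor_resources() {                              # installed above via: trap stop_monitor_resources EXIT
        signal_monitor_resources TERM                       # pm/common@29
    }

    signal_monitor_resources() {
        local monitor pid pids signal=$1                    # pm/common@40
        for monitor in "${MONITOR_RESOURCES[@]}"; do        # pm/common@42
            [[ -e $power_dir/$monitor.pid ]] || continue    # pm/common@43
            pid=$(<"$power_dir/$monitor.pid")               # pm/common@44: 68959 in this run
            kill -"$signal" "$pid"                          # pm/common@50: kill -TERM 68959
        done
    }

The EXIT trap runs this teardown when autopackage.sh exits, which is why the kill -TERM of the vmstat collector appears immediately after exit 0 in the trace.
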
00:16:49.558 [Pipeline] }
00:16:49.578 [Pipeline] // timeout
00:16:49.584 [Pipeline] }
00:16:49.602 [Pipeline] // stage
00:16:49.608 [Pipeline] }
00:16:49.626 [Pipeline] // catchError
00:16:49.636 [Pipeline] stage
00:16:49.638 [Pipeline] { (Stop VM)
00:16:49.655 [Pipeline] sh
00:16:49.939 + vagrant halt
00:16:51.876 ==> default: Halting domain...
00:17:13.841 [Pipeline] sh
00:17:14.124 + vagrant destroy -f
00:17:16.659 ==> default: Removing domain...
00:17:16.670 [Pipeline] sh
00:17:16.953 + mv output /var/jenkins/workspace/freebsd-vg-autotest_2/output
00:17:16.963 [Pipeline] }
00:17:16.981 [Pipeline] // stage
00:17:16.987 [Pipeline] }
00:17:17.007 [Pipeline] // dir
00:17:17.014 [Pipeline] }
00:17:17.033 [Pipeline] // wrap
00:17:17.041 [Pipeline] }
00:17:17.058 [Pipeline] // catchError
00:17:17.069 [Pipeline] stage
00:17:17.071 [Pipeline] { (Epilogue)
00:17:17.087 [Pipeline] sh
00:17:17.372 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:17:17.385 [Pipeline] catchError
00:17:17.387 [Pipeline] {
00:17:17.404 [Pipeline] sh
00:17:17.690 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:17:17.690 Artifacts sizes are good
00:17:17.699 [Pipeline] }
00:17:17.712 [Pipeline] // catchError
00:17:17.722 [Pipeline] archiveArtifacts
00:17:17.727 Archiving artifacts
00:17:17.778 [Pipeline] cleanWs
00:17:17.788 [WS-CLEANUP] Deleting project workspace...
00:17:17.788 [WS-CLEANUP] Deferred wipeout is used...
00:17:17.795 [WS-CLEANUP] done
00:17:17.797 [Pipeline] }
00:17:17.815 [Pipeline] // stage
00:17:17.822 [Pipeline] }
00:17:17.839 [Pipeline] // node
00:17:17.845 [Pipeline] End of Pipeline
00:17:17.883 Finished: SUCCESS